Workaround for thread freezes and/or token limits

@jhull is it better to make a document of the storyform from the text of the thread that froze (or maybe the whole conversation) and attach it to new conversation, or to cut and paste the text into the new conversation? Does it make a difference?

I would probably just download and then add that as a file attachment into the next one.

right, but why? Does that make a difference in the quality of results, as opposed to copying and pasting?

Also, sometimes this occurs across 2 to 5 or more conversations, as they successively freeze, sometimes immediately, sometimes after some work. For example, if I was on the fifth conversation, do I then download the last four and attach them?

One reason I’m inclined to upload only the storyform in its so-far-developed state, and not all the steps I took to get there in the prior conversations, is that the AI gets more and more overwhelmed by all that unfocused detail. I seem to get better results if it just has a storyform to deal with and not the genealogy of how we got there. The more specific you are with an LLM, the better results you get, as you know.

I assume at some point the freezes will become less frequent, but the token limit problem will always remain for a large project. Does adding multiple past generations just exacerbate the token limit problem?

I’ll probably be done with the storyforming after this week. Since activities around that seem to be the problem right now, I’m hoping I won’t have to deal with this as much.

I understand these are early days, and there are going to be bumps in the road, but I’m working on this pretty much full-time and am deep in the weeds so I need repeatable workarounds and optimal workflows.

Just read your article on Context Engineering, very nice.

https://narrativefirst.com/latest

There is a difference between what the user requests and what the system acknowledges as instructions in the background. A hierarchy exists wherein the system instructions and context Narrova sets in the background are held as “more important” than what the user provides.

Since the user can more often than not provide contradictory information in regards to a Storyform in their request, it defaults to holding whatever Storyform is set as context. If you can, I would read the documentation on how narrative context works in Narrova: Context | Dramatica

The current narrative context is indicated within the interface by two icons:

  • an open “storybook” for Story context
  • an “atom” for Storyform context

When you have those set, you don’t need to worry about copying and pasting the Storyform in; doing so will more likely than not confuse Narrova.

Postmortem on recent issues

Your most recent problems were due to a bug, introduced during last night’s rollout, in requests for examples of storyforms. Narrova provides a lot of information in response to these requests, and some of our handling of these larger requests was making it impossible for you to continue a conversation.

Your issues from last week were due to an error on our part in communicating the validating/finalizing process of setting Storyform context within a conversation. One issue was verbiage – most people think of “saving” or “creating” instead of finalizing, so we changed that. You can now simply ask Narrova to save the storyform you are working on, and it will do so, create the necessary story if needed, and set the new storyform as context for your conversation.

The other issue was the validation itself, which could take longer based on the approach we were using during our initial rollout. When a user had several different storyforms discussed within a single conversation (which may or may not be related to your experiences), the system would go back and forth trying to figure out the “one” Storyform across all the different explorations.

Now that Narrova has access to our new Storyform Builder, the validation process for aligning a Storyform with Dramatica theory proceeds more smoothly and quickly.

Most, if not all, of this will be covered in today’s livestream.