Discussion about this post

T.D. Inoue:

I hope you find takers. We could use something like this; the problem is real. The issue I see is that most people using LLMs treat them like Google search: type a query, get an answer, maybe go a few turns, close the window.

Continuous, long-term discussion is something the companies actively don't want: the computational complexity grows exponentially, and it gets dismissed as "only for AI companions." But you're 100% right. I spent yesterday developing a rigorous paper (posting soon!), and just as we were finalizing, Claude hit that point where it just stops taking prompts. Doesn't compact. Just thinks for a second and stops. So I had to go back a few turns, tell it to save its state, and start a fresh session. Very annoying. Fortunately the saved state captured the gist well enough, and Claude can scan previous sessions for specifics, but it loses the vibes.

So keep exploring. You might end up having to find people here who are experimenting with modifying locally hosted models, and collaborate with them.
