Hello there!
I just started playing with LLM nodes and ran into the 4096-token output limit, which is quite frustrating if you need to generate long-form content like articles.
I played around with the chunking feature in some of these nodes, but if I understood correctly, it's meant for processing large amounts of data on the input side, not the output.
Then I came up with the idea of splitting the work across more than one LLM node… but maybe there's a better way to do the same thing.
What do I mean by this? Pretty simple: put three basic LLM nodes in a chain, each one writing its own part of the article, but all working from the same input idea:
- Input → Write a pizza recipe
- First LLM → Write Introduction and ingredients needed for *Input
- Second LLM → Write just the second part of the article about *Input starting from here: *First LLM output
- Third LLM → Write the conclusion for the article about *Input from here: *First LLM output + *Second LLM output
- Merge node.
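To make the chaining concrete, here is a minimal sketch of the pipeline above in plain Python. `call_llm` is a hypothetical placeholder for whatever your LLM node actually invokes (it just echoes the prompt here, so the chaining logic itself can be run and inspected); the prompts and function names are my own illustration, not any node's real API.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; swap in your node's actual API here.
    return f"[LLM output for: {prompt[:40]}...]"

def write_article(topic: str) -> str:
    # First LLM: introduction and ingredients for the topic.
    part1 = call_llm(f"Write the introduction and ingredients needed for: {topic}")
    # Second LLM: continues the article, seeing the first part's output.
    part2 = call_llm(
        f"Write just the second part of an article about {topic}, "
        f"continuing from:\n{part1}"
    )
    # Third LLM: writes the conclusion, seeing everything written so far.
    part3 = call_llm(
        f"Write the conclusion for an article about {topic}, "
        f"continuing from:\n{part1}\n{part2}"
    )
    # Merge node: concatenate the three sections into one article.
    return "\n\n".join([part1, part2, part3])

article = write_article("Write a pizza recipe")
print(article)
```

One thing worth noting: because each later call receives the earlier outputs as context, the combined prompt grows at every step, so the input-token budget of the last node is what ultimately caps how long the article can get.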
Am I crazy, or does this sound like a good idea?