The idea is:
" Problem running workflow
Please execute the whole workflow, rather than just the node. (Existing execution data is too large.)
This is a pain to run into when developing a workflow that fetches and transforms data, only for the final node to fail and force a full restart. Instead of restarting, we should be able to re-run just the failed node and any nodes after it. Why can't the state of items up to the point of failure (i.e. the in-memory items output by the preceding node) be saved to disk, so the workflow can be resumed from there?
I find it so wasteful to repeat all the same LLM API calls and transformations just to test a change at the end of the chain.
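To make the behaviour I'm asking for concrete, here is a minimal sketch of the checkpoint-and-resume idea in plain Python. The node names and functions are made up for illustration and have nothing to do with the workflow engine's internals: each node's output is written to disk the first time it runs, so a failure in a later node can be retried without repeating the earlier, expensive steps.

```python
import json
import os

# Hypothetical checkpoint directory and node functions, for illustration only.
CHECKPOINT_DIR = "checkpoints"

def fetch_and_summarise():
    # Stands in for the expensive fetch/LLM steps.
    return [{"title": "Example article", "summary": "An example summary."}]

def post_to_social(items):
    # Stands in for the final posting step that is still being developed.
    for item in items:
        print(f"Posting: {item['title']}")

def run_node(name, fn, *args):
    """Run a node, reusing its cached output from disk if one exists."""
    path = os.path.join(CHECKPOINT_DIR, f"{name}.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    result = fn(*args)
    os.makedirs(CHECKPOINT_DIR, exist_ok=True)
    with open(path, "w") as f:
        json.dump(result, f)
    return result

if __name__ == "__main__":
    articles = run_node("fetch_and_summarise", fetch_and_summarise)
    # If posting fails, re-running the script skips the fetch/summarise step,
    # because its output is already checkpointed on disk.
    post_to_social(articles)
```

Something equivalent at the node level, using the execution data already held in memory, is what I'm suggesting.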
My use case:
I have a workflow that fetches, evaluates, and summarises news articles with an LLM and then posts them to a social media platform. When making changes to the final posting step, I can't resume the workflow from that step; I have to restart the whole workflow, which eats up API credits and, more importantly, costs me time, since those initial steps take a few minutes to complete.
I think it would be beneficial to add this because:
It will cut down on the development time of complex workflows that fetch and transform large volumes of data.
Any resources to support this?
The error shown above when attempting a partial execution (the existing execution data is too large for it).
Are you willing to work on this?
Possibly.