I'm building a prompt manager with multiple flows: saving prompts from the web to Notion (and to Supabase when a prompt is larger than 2000 chars), generating prompts from a broad outline, letting the user refine existing prompts to add more value, and letting the user look up existing prompts in the database through keywords. In most flows I was using OpenAI nodes, which I tested with 123 prompts. Suddenly they have gone bonkers. I have an AI node which extracts prompts from a blog chunk and then passes them to another AI node, which analyses each prompt to generate metadata like tone, tags, etc. Now it has suddenly started shortening the prompts. This was working before.
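To make the two-node chain concrete, here is a minimal sketch (not the actual workflow) of the hand-off between the extract node and the metadata node. The function name `addMetadata` and the `tone`/`tags` placeholders are illustrative assumptions; the key point the sketch shows is that the metadata step should only annotate each item and pass `raw_prompt` through verbatim:

```javascript
// Illustrative sketch of the two-step chain: node 1 extracts prompts from a
// blog chunk, node 2 adds metadata (tone, tags, etc.) WITHOUT rewriting the
// original text. `addMetadata` is a hypothetical placeholder, not a real node.

function addMetadata(extracted) {
  return extracted.map((item) => ({
    raw_prompt: item.raw_prompt,          // passed through untouched
    char_count: item.raw_prompt.length,   // used later for overflow routing
    tone: "unknown",                      // placeholder: would come from the LLM
    tags: [],                             // placeholder: would come from the LLM
  }));
}

const extracted = [{ raw_prompt: "Act as a travel planner. ".repeat(100) }];
const enriched = addMetadata(extracted);
console.log(enriched[0].char_count); // 2500
```

If the character count going into the overflow check is computed on text the LLM has already shortened, the >2000-char branch would never fire, which matches the symptom described above.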
Hello @Swati, can you attach your workflow here pls?
I think it's because you're parsing it wrong (in the Code node or in the OpenAI node)
Apologies, this is my flow to save a prompt from the web to the Notion DB. If the prompt is larger than 1900 chars, it first saves the prompt's metadata in Notion as a placeholder, with a tag of "OVERFLOW", saves the actual raw prompt found on the web to Supabase along with the Notion record's page id, and then backfills the Notion page with the Supabase record id.

The OpenAI node for extracting metadata was meant to generate metadata like role, tone, tags, notes, category, etc. related to the prompts extracted from an article or blog on the web by the first OpenAI node. Screenshots of the flow and of the user message to the extract node are attached.

Both AI nodes are suddenly behaving erratically, mainly the extract node, which is shortening my original "raw_prompt". Because of that I'm losing the original prompt, and my overflow flow (prompts larger than 2000 characters) is no longer being invoked. This was all working two days ago. I tried using a Claude model with the LLM node instead, but it did worse. I don't come from a coding or AI background, so any advice on what AI to use, or how to fix this?
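The overflow flow described above can be sketched as a single routing function. This is a hedged sketch assuming a 1900-char threshold; `createPage`, `update`, and `insert` are illustrative stand-ins for the Notion and Supabase nodes, not their real APIs:

```javascript
// Sketch of the overflow routing: small prompts go straight to Notion;
// large ones get a metadata-only placeholder in Notion tagged "OVERFLOW",
// the raw text goes to Supabase with the Notion page id, and the Notion
// page is then backfilled with the Supabase record id.

const OVERFLOW_THRESHOLD = 1900;

function routePrompt(prompt, notion, supabase) {
  if (prompt.raw_prompt.length <= OVERFLOW_THRESHOLD) {
    // Small prompt: store everything directly in Notion.
    return { route: "notion", pageId: notion.createPage(prompt) };
  }
  // 1. Metadata-only placeholder in Notion, tagged "OVERFLOW".
  const pageId = notion.createPage({ ...prompt, raw_prompt: "", tags: ["OVERFLOW"] });
  // 2. Untouched raw prompt into Supabase, linked to the Notion page.
  const recordId = supabase.insert({ raw_prompt: prompt.raw_prompt, notion_page_id: pageId });
  // 3. Backfill the Notion page with the Supabase record id.
  notion.update(pageId, { supabase_record_id: recordId });
  return { route: "overflow", pageId, recordId };
}

// Tiny in-memory stand-ins for the Notion and Supabase nodes.
const notion = {
  pages: {}, next: 1,
  createPage(p) { const id = "pg" + this.next++; this.pages[id] = { ...p }; return id; },
  update(id, f) { Object.assign(this.pages[id], f); },
};
const supabase = {
  rows: {}, next: 1,
  insert(r) { const id = "sb" + this.next++; this.rows[id] = r; return id; },
};

const res = routePrompt({ raw_prompt: "x".repeat(2000), tags: [] }, notion, supabase);
console.log(res.route); // "overflow"
```

One consequence of this shape: the routing only works if `raw_prompt` reaches it unmodified, so anything upstream (like the extract node) shortening the text silently disables the overflow branch.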
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.


