Running n8n via (Docker, npm, n8n cloud, desktop app): Railway
Operating system:
I’m really having a tough time wrapping my head around the custom tools for LLM chains. In my mind, having n8n workflows available as tools is game-changing; I just need to make sense of how that’s going to work.
I’ve had a pretty well-developed OpenAI Assistant for a while that I’ve been calling via the HTTP Request node. I’d love to give that Assistant the ability to send emails and reply to emails. Do those workflows need to live in the same workflow as the Assistant chain? And how am I passing data to the Gmail nodes (who to email, what to say, etc.)? A rough sketch of what I picture is below.
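To make that second question concrete, here is roughly what I picture on the OpenAI side: a hypothetical `send_email` function tool registered on my existing Assistant, whose structured arguments are the data I’d want to land in the Gmail node. This is just a sketch of my assumption, not something I’ve built yet.

```typescript
// Sketch only: a hypothetical "send_email" function tool on my existing Assistant.
// The idea is that the Assistant fills in these fields, and those values are what
// the Gmail node would consume.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function addEmailTool(assistantId: string) {
  // Note: update() replaces the Assistant's tool list, so any existing tools
  // would need to be included here as well.
  await client.beta.assistants.update(assistantId, {
    tools: [
      {
        type: "function",
        function: {
          name: "send_email", // placeholder name for an n8n Gmail workflow
          description: "Send or reply to an email on the user's behalf",
          parameters: {
            type: "object",
            properties: {
              to: { type: "string", description: "Recipient address" },
              subject: { type: "string" },
              body: { type: "string", description: "Plain-text message body" },
            },
            required: ["to", "subject", "body"],
          },
        },
      },
    ],
  });
}
```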
Not sure if you have seen this, but we now have a tutorial on building AI workflows in n8n. The important part is that it covers building a workflow using the Custom n8n Workflow Tool node and provides a pretty good example that could be useful. Give it a go and let me know if it helps answer your question.
I have been reading that to understand the concept. Here’s my main idea…
I have an Assistant that I’ve been building for a while. It will receive emails and process the info against its knowledge base. I’d love for it to be able to use Gmail to reply to an email to gather more information. In turn, once it’s satisfied, I’d like it to create a task in ClickUp.
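In case it helps to see what I mean by the hand-off, here is a rough sketch of how I imagine the workflow reading the run object my HTTP Request node gets back when the Assistant pauses for a tool call. The tool names and fields are just my placeholders, not real n8n nodes.

```typescript
// Sketch of the hand-off I have in mind, working from the raw run object the
// HTTP Request node gets back from the Assistants API once a run needs tool output.
// "send_email" / "create_clickup_task" are placeholder names I made up.
type ToolCall = {
  id: string;
  function: { name: string; arguments: string }; // arguments arrives as a JSON string
};

type Run = {
  status: string;
  required_action?: {
    submit_tool_outputs: { tool_calls: ToolCall[] };
  };
};

// Pull out the fields that the Gmail / ClickUp steps would need
function extractToolInputs(run: Run) {
  if (run.status !== "requires_action" || !run.required_action) return [];

  return run.required_action.submit_tool_outputs.tool_calls.map((call) => ({
    tool: call.function.name,                  // e.g. "send_email"
    args: JSON.parse(call.function.arguments), // e.g. { to, subject, body }
    toolCallId: call.id,                       // needed when submitting the tool output back
  }));
}
```

If that mental model is right, the Gmail node would just map `args.to` / `args.subject` / `args.body` into its To, Subject, and Message fields, and a `create_clickup_task` call would feed the ClickUp node the same way.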