I built a tool where you input a natural language description of an automation workflow and it generates the workflow for n8n (it outputs a JSON representation that you can copy and paste into the canvas).
The purpose of this project was to create an educational example of how to build complex GPT workflows whose outputs are deterministic. Hopefully the n8n community, and the automation community in general, will get more involved in AI and some collaboration can happen.
I open sourced the project so that it can serve as a foundation for all of you to build complex GPT workflows that use prompt chaining, semantic search through embeddings, and deterministic AI outputs.
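For anyone new to the semantic-search idea, here is a minimal sketch (my own illustration, not the repo's actual code) of the core step: rank candidate node descriptions by cosine similarity between a query embedding and pre-computed node embeddings. The node names and vectors below are made up for the example:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical pre-computed embeddings for two n8n node descriptions.
# In the real workflow these would come from an embeddings API and be
# stored in a vector DB like Pinecone.
node_embeddings = {
    "HTTP Request": [0.9, 0.1, 0.0],
    "Send Email":   [0.1, 0.8, 0.2],
}

def most_similar(query_embedding):
    # Pick the node whose embedding is closest to the query's.
    return max(node_embeddings, key=lambda n: cosine(query_embedding, node_embeddings[n]))

print(most_similar([0.85, 0.2, 0.05]))  # closest to "HTTP Request"
```

The same idea scales to hundreds of node descriptions; the vector DB just makes the nearest-neighbour lookup fast.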
Please feel free to send me a Twitter DM (https://twitter.com/JAPozueloM) or comment here if you have any observations or questions.
If more people are interested in this, I can make more educational resources like videos and walkthroughs of the concepts. I also encourage you to improve the workflow and submit pull requests on GitHub.
Top things to improve:
Add custom logic for CODE nodes (teaching GPT about the internal structure, methods, and variables available in n8n)
Add a DB of the parameters required for each node, link them to the resource/operation combinations that “show” them, then create a GPT workflow to fill those in for each node.
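To sketch what that parameter DB could look like, here is a hypothetical lookup keyed by (resource, operation); the parameter names are illustrative, not n8n's actual schema:

```python
# Hypothetical table of required parameters per (resource, operation)
# combination for a node type. A GPT workflow could iterate over the
# missing ones and fill them in one by one.
REQUIRED_PARAMS = {
    ("message", "send"):   ["channel", "text"],
    ("message", "update"): ["channel", "ts", "text"],
    ("channel", "create"): ["name"],
}

def missing_params(resource, operation, provided):
    """Return the required parameters that still need to be filled in."""
    required = REQUIRED_PARAMS.get((resource, operation), [])
    return [p for p in required if p not in provided]

print(missing_params("message", "send", {"channel": "#general"}))  # ["text"]
```

The nice property of this shape is that the GPT step only ever sees the short list of parameters that are actually relevant, which keeps the prompts small and the outputs deterministic.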
Wow, that sounds really cool! Tools like this will make it much easier for non-technical people to create more complex workflows. I was actually thinking of making a simple n8n workflow using the GPT-3 API for something similar.
I’m very impressed by your work - it’s great to see people pushing the boundaries of what’s possible!
Hey @jpm thanks for sharing the repo, great stuff!!!
I think I’ve got it all set up after running all 6 workflows successfully, which has propagated data to my Pinecone and Firebase instances.
I’m not getting the greatest results. I was wondering if you could shed some more light on where things could be improved. Namely, it doesn’t seem to fill in many of the node fields, and it may not be connecting them up properly. Have you noticed this at all?
I’ve also tried changing the workflows and the Replit script so that they use davinci rather than the smaller models.
It seems to improve things slightly, or maybe it makes the model a bit more confused… I’m not sure. Happy to share the code. I was considering moving to GPT-4, but I assume that will require some further modifications. I’m also wondering if it’s because some of the models have been deprecated, though they still show up when I list all models, so maybe not.
Also, it may be a result of my initial request: I wasn’t sure how to format the call to the trigger workflow’s webhook. Can you give some details on the correct format for that?
It doesn’t fill out the fields because that part was left for the next iteration, and I never got to it haha. The project is quite old, and nowadays there are considerably more sophisticated techniques for setting up the embeddings and prompts, so I think it would be easier to map out all the fields today.
As for the call to the webhook, I basically just send the user query in the format of my Tally form, but you can edit it to map to wherever you are receiving the user query from.
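As a concrete sketch, something like this should work if your trigger webhook accepts a JSON body; the `user_query` field name and the URL are placeholders you would adapt to your own form and n8n instance:

```python
import json
from urllib import request

def build_request(webhook_url, user_query):
    # The field name "user_query" is an assumption -- map it to whatever
    # key your trigger workflow reads from the Tally form submission.
    payload = json.dumps({"user_query": user_query}).encode("utf-8")
    return request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(
    "https://your-n8n-host/webhook/your-path",  # placeholder URL
    "Send a Slack message when a new row is added",
)
print(req.get_full_url())
# request.urlopen(req)  # uncomment to actually fire the trigger
```

If you are testing from the n8n editor, remember the test webhook URL uses the `/webhook-test/` path instead of `/webhook/`.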
Send me a DM and I can share another workflow with some exercises using the different OpenAI and Anthropic models; it might help you migrate the project to GPT-4 if you’d like. (The JSON is too long to add to this post.)