Ever wondered which of your n8n workflows are using which credentials? Just getting started with AI agents and want to use them for everything?
Well, you’re in luck! Here’s a neat little trick that uses an AI agent as a search interface over your n8n workflows and their credentials. In this example, you’ll learn:
Utilising the n8n API with the HTTP Request node, including a pagination example
Creating ephemeral SQLite databases with the Code node
Building an AI Agent with tools - in particular, an SQL tool backed by an SQLite database.
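To make the first two steps concrete, here’s a minimal sketch of the idea in Python: page through the n8n REST API (`GET /api/v1/workflows`, which uses cursor-based pagination via a `nextCursor` field) and load the workflow-to-credential mapping into an in-memory SQLite database that an agent’s SQL tool could then query. The page-fetching function is injected so the pagination logic is testable without a live instance; the endpoint and field names follow the public n8n API docs, but double-check them against your n8n version.

```python
import sqlite3

def fetch_all_workflows(fetch_page):
    """Collect every workflow by following the `nextCursor` field."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)      # e.g. GET /api/v1/workflows?cursor=...
        items.extend(page["data"])
        cursor = page.get("nextCursor")
        if not cursor:                 # null/absent cursor = last page
            return items

def build_workflow_db(workflows):
    """Create an ephemeral SQLite DB mapping workflows to credentials."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE workflow_credentials (workflow TEXT, credential TEXT)")
    for wf in workflows:
        # In workflow JSON, each node carries a `credentials` dict
        # mapping credential type -> {id, name}.
        for node in wf.get("nodes", []):
            for cred in node.get("credentials", {}).values():
                db.execute(
                    "INSERT INTO workflow_credentials VALUES (?, ?)",
                    (wf["name"], cred["name"]),
                )
    return db
```

The agent’s SQL tool then only needs to run plain `SELECT` statements against `workflow_credentials` to answer questions like “which workflows use the Slack credential?”.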
Thanks @sirdavidoff. Ha, I didn’t initially look at it that way, but now you’ve mentioned it… yes, it is all very meta! Really enjoying using the AI nodes btw - they make it really easy to test out things like this.
Yes, I’m definitely looking to submit a few templates soon
Hey @Jim_Le, loving the ephemeral DB technique! Am I right to conclude then that the DBs are not persistent across executions (as the name suggests)?
Edit: Checked. It looks like the DBs do persist across executions. I guess there are implications then for the volume of data the instance is going to accrue (on n8n Cloud plans), including:
Increased execution log pruning
Increased likelihood of out-of-memory incidents.
Can anyone advise if it’s possible to check the instance memory status during runtime? It doesn’t look like the API supports it.
Just wanted to add a slight but important correction: whilst the SQLite instance might appear to persist across executions, it really lives entirely in memory and does remove itself eventually once memory management/garbage collection kicks in - no-one should expect, or be led to believe, otherwise.
Also note that the Code node does not have write access to disk - if I recall correctly, it actually uses a virtual filesystem.
To answer your question definitively: Code node SQLite instances are not persistent across executions.
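The non-persistence is easy to demonstrate with SQLite’s own in-memory semantics: every `:memory:` connection is a brand-new database, so nothing written by one connection is visible to the next - analogous to each workflow execution getting a fresh environment. A minimal Python sketch:

```python
import sqlite3

# First connection plays the role of "execution 1".
first = sqlite3.connect(":memory:")
first.execute("CREATE TABLE runs (id INTEGER)")
first.execute("INSERT INTO runs VALUES (1)")

# Second connection plays the role of "execution 2" - a fresh database.
second = sqlite3.connect(":memory:")
try:
    second.execute("SELECT * FROM runs")
    persisted = True
except sqlite3.OperationalError:  # "no such table: runs"
    persisted = False

print(persisted)  # False - the table from the first connection is gone
```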
Can anyone advise if it’s possible to check the instance memory status during runtime?
For self-hosted, you can use the /metrics endpoint (Monitoring | n8n Docs)
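The /metrics endpoint returns Prometheus text exposition format; the default process metrics include memory gauges such as `process_resident_memory_bytes` (verify the exact metric names your instance exposes). A hedged sketch of pulling one value out of that text:

```python
def read_metric(metrics_text, name):
    """Return the first value of `name` from Prometheus text exposition."""
    for line in metrics_text.splitlines():
        if line.startswith(name):
            # Format: metric_name{optional_labels} value
            return float(line.rsplit(" ", 1)[1])
    return None

# In practice you'd fetch this text with an HTTP request to your
# instance's /metrics endpoint (self-hosted only); the sample below
# is hardcoded for illustration.
sample = """# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 183500800
"""
print(read_metric(sample, "process_resident_memory_bytes"))  # 183500800.0
```

You could run this on a schedule and alert when resident memory approaches your container limit.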
For cloud, unfortunately it’s not possible. You can check the cloud instance limits in the docs - Cloud data management | n8n Docs. If you can run your workflow locally with the Docker container specs set to match those of your cloud plan, that may help you diagnose any memory limit issues you’re running into.