Every time I update n8n (locally installed community version), all Scrapeless community nodes become unrecognised, and the only fix is to uninstall and reinstall the Scrapeless node, then re-add and reconfigure every instance of it in every workflow.
Is this normal behaviour for community nodes, or is it avoidable - by me doing something, or by Scrapeless if it’s brought to their attention?
Hi @selbrae, are you installing as standalone or in a Docker container? If the latter, just make sure your data is persisted so you can easily spin down and recompose your containers without losing anything. For standalone, make sure you're copying your original .env file back into the install folder, especially if you use git to pull the latest updates, which can inadvertently overwrite it.
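For the Docker case, a minimal compose sketch (service and volume names are illustrative) that persists the whole `/home/node/.n8n` directory, which holds both the SQLite database and the `nodes/` folder where community nodes are installed, looks something like this:

```yaml
# Illustrative sketch, not a complete production config.
# The named volume keeps /home/node/.n8n (database + community nodes)
# intact when the container is recreated after an image update.
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```

If only the database survives an update but community nodes do not, it's worth checking that the mount covers the full `.n8n` directory rather than just the database file.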
I should have mentioned that this instance is in a Docker container, and I do have persistent storage. It seems to be working, as I'm using SQLite for some workflows and I can see the db there.
However, the nodes subdirectory in that volume contains only package.json, and the contents of that file are only:
I installed another community node just to see if it behaved differently from the Scrapeless one, but there were no changes to that file, so I'm wondering if that is where the problem lies …
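One thing that may be worth trying (hedged, based on my reading of the n8n environment-variable docs, so please verify against the current documentation): n8n records installed community packages in its database and can reinstall any that are missing from `~/.n8n/nodes` on startup, which would cover the case where the volume's `nodes/` contents don't survive an update. In compose form:

```yaml
# Hedged sketch: ask n8n to reinstall community packages that are
# recorded in its database but missing from ~/.n8n/nodes at startup.
services:
  n8n:
    environment:
      - N8N_REINSTALL_MISSING_PACKAGES=true
```

Note this reinstalls the packages themselves; whether your existing workflow nodes reconnect cleanly afterwards is the part I'd test.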