Is it possible to call a huggingface dedicated endpoint in n8n?

As the question says, I'm wondering if the Hugging Face Inference node in n8n can be used to invoke a dedicated production endpoint, or whether this has to be done through an HTTP request.

Thanks in advance.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey, sorry, kind of new here:

  • n8n version: 1.39.1
  • Database (default: SQLite): Default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Cloud
  • Operating system: Windows 10

@Extempore_Educacion , yes, you can use a dedicated endpoint. Both Hugging Face nodes, “Hugging Face Inference Model” and “Embeddings HuggingFace Inference”, have a “Custom Inference Endpoint” option.
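
If you would rather go the HTTP route instead (for example via n8n's HTTP Request node), the call to a dedicated endpoint is just an authenticated POST. Here is a minimal TypeScript sketch; the endpoint URL, the `HF_TOKEN` environment variable, and the `{ inputs }` payload shape are assumptions based on a typical text-task endpoint, so adjust them to match your deployment.

```typescript
// Minimal sketch: calling a Hugging Face dedicated Inference Endpoint directly.
// ENDPOINT_URL is a placeholder; use the URL shown for your deployed endpoint.
const ENDPOINT_URL = "https://your-endpoint.endpoints.huggingface.cloud";
const HF_TOKEN = process.env.HF_TOKEN ?? ""; // an access token with permission to call the endpoint

async function queryEndpoint(inputs: string): Promise<unknown> {
  const response = await fetch(ENDPOINT_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    // Most text tasks accept an { inputs } payload; other tasks may differ.
    body: JSON.stringify({ inputs }),
  });

  if (!response.ok) {
    throw new Error(`Endpoint returned ${response.status}: ${await response.text()}`);
  }
  return response.json();
}

// Example usage:
queryEndpoint("Hello from n8n").then(console.log).catch(console.error);
```

The same headers and JSON body can be configured in the HTTP Request node if you prefer to stay entirely inside n8n.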


Thanks a lot, it was there all along, didn’t see it.