Koong
March 25, 2025, 9:53am
I deployed a model from Hugging Face using Google Model Garden, and I want to connect it to the Google Vertex AI Chat Model node for inference. However, I am unable to establish the connection successfully.
Has anyone managed to integrate a custom-deployed Hugging Face model with the Vertex AI Chat Model node? If so, what are the necessary steps or configurations to make it work?
Any guidance would be greatly appreciated!
Hey Koong, let’s try getting that Hugging Face model working with Vertex AI through n8n. Here’s the idea:
First, check if the model is properly deployed in Vertex:
Go to Vertex AI → Model Registry
Make sure the model has an active endpoint
Grab the Project ID, Region, and Endpoint ID
Try the Vertex AI node (if it recognizes the model):
Use a credential with Vertex AI User permissions
Fill in the model ID and your project info
But if it’s a custom model, the node might not detect it
Plan B: Use a raw HTTP Request to the endpoint:
Set the URL like this:
https://{region}-aiplatform.googleapis.com/v1/projects/{project-id}/locations/{region}/endpoints/{endpoint-id}:predict
Headers:
{
  "Authorization": "Bearer YOUR_TOKEN",
  "Content-Type": "application/json"
}
Body:
{
  "instances": [
    { "content": "your input here" }
  ]
}
Before plugging it into n8n, test it with curl or Postman to make sure the endpoint responds properly.
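If Python is handier than curl, here's a minimal sketch of the same request built with the standard library. The project ID, region, and endpoint ID below are placeholders, and the token would come from something like `gcloud auth print-access-token` — swap in your own values before sending anything:

```python
import json
import urllib.request

# Hypothetical values -- replace with your own deployment's details.
PROJECT_ID = "my-project"
REGION = "us-central1"
ENDPOINT_ID = "1234567890"
TOKEN = "YOUR_TOKEN"  # e.g. output of `gcloud auth print-access-token`

# Same :predict URL shape as above, assembled from the parts.
url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1"
    f"/projects/{PROJECT_ID}/locations/{REGION}"
    f"/endpoints/{ENDPOINT_ID}:predict"
)

body = {"instances": [{"content": "your input here"}]}

req = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Inspect the request first; uncomment to actually call the endpoint:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
print(req.full_url)
```

Once the raw request works here, the same URL, headers, and body transfer straight into an n8n HTTP Request node.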
If you need help formatting the request or setting up the auth, just let me know.
Dandy
system
Closed
July 2, 2025, 1:21am
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.