🚧 Ask AI updates: HTTP Node helper (beta), and self-hosted!

Also still having the same issue here, running on Docker behind Traefik.
What other information could we provide so that you can reproduce this issue on your side?

I updated to 1.41.0 today and decided to try the Ask AI function, but I can’t get it to work. I am using the self-hosted version and get these errors:
Did anyone figure out which model and API we need to use?

I used EasyPanel for n8n deployment, but I don’t know how to enable the Ask AI feature. I’ve already installed version 1.41.1.

Hey all, to get access to the Ask AI feature in the HTTP Request node on self-hosted, you only need to set the N8N_AI_OPENAI_API_KEY variable to a valid OpenAI API key of yours. Ideally, this API key has access to the gpt-4-turbo model; otherwise, you might need to change the N8N_AI_OPENAI_MODEL environment variable.

Does that work for you?
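To make that concrete, here is a minimal sketch of what the setup described above could look like when starting n8n from a shell. The key value is a placeholder, and the optional model override is only needed if your key cannot access gpt-4-turbo:

```shell
# Sketch: enable Ask AI on a self-hosted n8n instance started from the shell.
# Replace the placeholder with your own OpenAI API key.
export N8N_AI_ENABLED=true
export N8N_AI_OPENAI_API_KEY="<your-openai-api-key>"

# Optional: override the model if your key has no gpt-4-turbo access.
export N8N_AI_OPENAI_MODEL="gpt-4-turbo"

n8n start
```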

Thank you, Niklas, for your reply. Could you please let me know where I can find this variable to modify it? Specifically, which file contains it and what is the file path?

If there is any video explaining this point and how to solve that issue, it would be very helpful. Thanks a lot!

Hey Ahmed, you have to start n8n with these environment variables set. You can find more here: Environment Variables Overview | n8n Docs
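Since several people in this thread run n8n in Docker, a sketch of passing the variables on the command line with `docker run` (the port mapping and image tag are the usual defaults, but check your own deployment):

```shell
# Sketch: pass the Ask AI environment variables to a Dockerized n8n.
# Replace the placeholder with your own OpenAI API key.
docker run -it --rm \
  -p 5678:5678 \
  -e N8N_AI_ENABLED=true \
  -e N8N_AI_OPENAI_API_KEY="<your-openai-api-key>" \
  docker.n8n.io/n8nio/n8n
```

In a docker-compose or EasyPanel setup, the same variables go into the service's environment section instead.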


It still does not work.
Updated to version 1.42.0 and only set N8N_AI_OPENAI_KEY and N8N_AI_ENABLED. Now I can only see the button in the Code node, and it keeps returning the same unknown error.

What other information could we provide so that you can reproduce this issue on your side?

Ok, so apparently you need a paid API account. Once I switched and loaded a few bucks on there, it seems to work. But the request used gpt-4-turbo, which costs more per token. Is there a way to change the model to 3.5, as it’s quite a lot cheaper?

Thanks a lot, it’s working now. I added N8N_AI_OPENAI_API_KEY to the environment.

Now it works for me on the http request node, but not on the code node :smiley:

And asking AI returns: “A call to https://api.pinecone.io/indexes/api-knowledgebase returned HTTP status 404.”

Hi, I’m trying to add the feature on self-hosted and it doesn’t come up for me at all. I’ve set the following:

  • N8N_AI_ENABLED=true
  • N8N_AI_OPENAI_API_KEY={MyAPI}
  • N8N_AI_PROVIDER= openai

I don’t see the feature anywhere when I restart.

Thanks a lot for the revolutionary feature, and thanks to the entire n8n team.

It works in HTTP requests but not in code. I used the following settings:

N8N_AI_ENABLED=true
N8N_AI_PROVIDER=openai
N8N_AI_OPENAI_MODEL=gpt-4-turbo
N8N_AI_OPENAI_API_KEY=OPENAI_API_KEY

I also tried replacing the OpenAI model with gpt-3.5, gpt-4, and gpt-3.5-turbo, but none of them worked.

Hey @bartv , this feature sounds awesome, but could you perhaps ask the developers to add one small thing?

Specifically, could we have the ability to set something like an N8N_AI_OPENAI_BASE_PATH value?

Specifically, I use an OpenAI Compatible Proxy called LiteLLM to interact with OpenAI endpoints. More details are here:

Essentially, the proxy provides an “OpenAI-compatible” drop-in replacement API and then you can point the proxy to over 100+ LLMs on the backend.

That way, the n8n team doesn’t have to spend time supporting many, many different LLM flavors. All you’d need to do is allow n8n operators to specify a different base path in the code, and the proxy handles the rest of the heavy lifting.

I talk with the LiteLLM developers regularly, so if your dev team needs help getting that to work, let me know and I’m sure the LiteLLM team can quickly fix any compatibility issues. (So far, I’ve used the proxy on over 10+ other OpenAI-based apps, and it works just fine.)

Hope that helps!
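To illustrate what “OpenAI-compatible” means here: a LiteLLM proxy (assumed below to be running locally on its default port 4000, with a configured master key) accepts the same chat-completions request shape as the OpenAI API, so any client that lets you change the base URL can talk to it unchanged:

```shell
# Sketch: a standard OpenAI-style chat-completions request sent to a locally
# running LiteLLM proxy instead of api.openai.com. The model name is whatever
# alias the proxy is configured to route; the key is the proxy's own key.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <litellm-master-key>" \
  -d '{
        "model": "gpt-4-turbo",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

This is why a single configurable base path would be enough: the request format stays identical, only the host changes.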


Maybe you could try the Ollama node; there you can enter the base URL of your LiteLLM proxy. Someone used it to connect to Cloudflare Workers AI and it worked.