Configurable timeout for AI / LangChain nodes

Hi n8n team,
AI / LangChain nodes currently time out after ~5 minutes, with no option to change the limit.
For large LLM calls (RAG or summarization tasks) this is too short.
Please add a configurable “Request timeout (ms)” field or an environment variable (e.g. AI_NODES_DEFAULT_TIMEOUT_MS).
Right now we’re forced to use the HTTP Request node just for timeout control.
Thanks!

The idea is:

Add a timeout configuration option to AI / LangChain nodes (Agent, Basic LLM Chain, etc.) so users can increase or decrease the request duration limit manually.
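To make the proposal concrete, a self-hosted deployment might set it like this. Note that AI_NODES_DEFAULT_TIMEOUT_MS is only the name suggested above; it does not exist in n8n today, and the final name would be up to the team:

```shell
# Hypothetical variable (proposed in this request) — not an existing n8n setting.
# Value in milliseconds; 1200000 ms = 20 minutes, enough for long RAG runs.
export AI_NODES_DEFAULT_TIMEOUT_MS=1200000
echo "$AI_NODES_DEFAULT_TIMEOUT_MS"
```

A per-node “Request timeout (ms)” field could then override this global default, mirroring how other node options work.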

My use case:

I run long LLM operations (for example, RAG retrieval or document summarization) where the model takes 10–20 minutes to respond.
Currently the AI / LangChain nodes time out at 5 minutes and fail, so I’m forced to fall back to the HTTP Request node.
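For reference, the workaround today is an HTTP Request node calling the model API directly with its Timeout option raised. A rough sketch of the node parameters (abridged; the exact JSON layout varies by n8n version, and the URL here assumes OpenAI):

```json
{
  "parameters": {
    "method": "POST",
    "url": "https://api.openai.com/v1/chat/completions",
    "options": {
      "timeout": 1200000
    }
  },
  "type": "n8n-nodes-base.httpRequest"
}
```

This works, but it means rebuilding prompt assembly, memory, and output parsing by hand instead of using the AI nodes.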

I think it would be beneficial to add this because:

It would make AI workflows more reliable and production-ready.
Users wouldn’t need to bypass the AI nodes just to set a timeout, and self-hosted installations could handle long-running chains safely.

Any resources to support this?

Several threads on the n8n community mention the same limitation and request a configurable timeout for AI nodes.
Example: https://community.n8n.io/t/ai-node-timeout-increase/37028

Are you willing to work on this?

I can help test the feature and provide feedback after implementation.
