OpenAI Chat Model - Support for extra_body Option Please?

In standard LangChain code, it’s possible to pass additional request parameters to the model via the extra_body option, like this:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
import os

# Any value works here because the request goes to a local LiteLLM proxy,
# not to OpenAI directly.
os.environ["OPENAI_API_KEY"] = "anything"

chat = ChatOpenAI(
    openai_api_base="http://0.0.0.0:4000",  # local LiteLLM proxy
    model="zephyr-beta",
    extra_body={
        # Forwarded verbatim in the JSON request body; the proxy uses it
        # to fall back to gpt-3.5-turbo if zephyr-beta fails.
        "fallbacks": ["gpt-3.5-turbo"]
    },
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that I'm using to make a test request to."
    ),
    HumanMessage(
        content="test from litellm. tell me why it's amazing in 1 sentence"
    ),
]
response = chat(messages)

print(response)
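For context, here is a minimal sketch (plain Python, no network calls) of what extra_body does at the wire level, assuming an OpenAI-compatible endpoint such as a LiteLLM proxy: the keys are merged into the top level of the JSON body that gets POSTed to the chat completions endpoint.

```python
import json

# Build the standard chat-completions request body.
base_body = {
    "model": "zephyr-beta",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "test from litellm"},
    ],
}

# Extra, non-standard parameters the proxy understands (assumption: a
# LiteLLM proxy that supports "fallbacks").
extra_body = {"fallbacks": ["gpt-3.5-turbo"]}

# extra_body keys are merged into the top level of the request body,
# alongside "model" and "messages".
request_body = {**base_body, **extra_body}
print(json.dumps(request_body, indent=2))
```

This is exactly why an extra_body field on the n8n node would be enough: the node only needs to merge arbitrary user-supplied JSON into the request body it already builds.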

I’d like to be able to pass arbitrary JSON via an extra_body parameter on the OpenAI Chat Model node in an n8n workflow, but the current node doesn’t support this. Can we please get this parameter added to the node? Thanks!

Ref:

Do you want to do this in self-hosted or cloud?

@Arno_Burnuk, self-hosted.

Just built something like this with Claude; it extends the OpenAI Chat Model node. It’s somewhat clunky at the moment (I install a tarball into the n8n Docker image), but it works: it attaches three fields to the LLM request so we can use Memori self-hosted. Maybe this can help, so sharing here…

The resulting payload looks like this:

{
  "memori_attribution": {
    "entity_id": "user-12345",
    "process_id": "n8n-agent-memori-test",
    "session_id": "d8787d87d87d87d87d8"
  },
  "messages": [
    {
      "content": "You are a helpful personal assistant. You have long-term memory provided by the backend. When the user states a fact about themselves, acknowledge it briefly. When they ask a question, answer based on what you remember about them. Replies should be 1-2 sentences, natural, no guessing. If you have no memory of something, say so.",
      "role": "system"
    },
    {
      "content": "Hello",
      "role": "user"
    }
  ],
  "model": "MiniMaxAI/MiniMax-M2.7",
  "stream": false,
  "temperature": 0.7
}
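For anyone who wants to reproduce this from Python rather than a patched node, here is a hypothetical sketch of how the three attribution fields above could be merged into a standard chat-completions body, the same way a generic extra_body option would. The `with_attribution` helper is my own illustration, not part of any library.

```python
def with_attribution(body: dict, entity_id: str, process_id: str, session_id: str) -> dict:
    """Return a copy of the request body with Memori attribution fields attached.

    Hypothetical helper: mimics what merging an extra_body dict into the
    request would do, without mutating the original body.
    """
    extra = {
        "memori_attribution": {
            "entity_id": entity_id,
            "process_id": process_id,
            "session_id": session_id,
        }
    }
    return {**body, **extra}

# Standard chat-completions body, as the n8n node would build it.
body = {
    "model": "MiniMaxAI/MiniMax-M2.7",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    "temperature": 0.7,
}

payload = with_attribution(
    body,
    entity_id="user-12345",
    process_id="n8n-agent-memori-test",
    session_id="d8787d87d87d87d87d8",
)
```

The merge leaves the original body untouched, so the same base request can be reused across sessions with different attribution values.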