Swap smith.langchain for Langfuse

Besides smith.langchain, is it possible to configure Langfuse in n8n to see my interactions?

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I don’t see a question here…

Hey @Ruan17,

Do you have any more details on this? The post seems a bit vague.


Hey @barn4k and @Jon, I think the questions might be:

  1. Does n8n support direct langfuse.com integration?
  2. If yes, then how does that work?

– Langfuse maintainer here

Happy to help figure out how best to send traces to Langfuse from n8n. We maintain stable integrations for LangChain, LlamaIndex, and Haystack, so it should be fairly easy to add Langfuse if n8n already supports other observability solutions. Please ping me if you'd like to lead the effort on this or help figure it out.


That’s exactly it.

It would be great to add this feature, @Jon.


I was able to get Langfuse tracing and prompt support working with self-hosted n8n.

You’ll need to build a custom image that installs the langfuse and langfuse-langchain packages:

FROM n8nio/n8n:1.53.2
USER root
RUN npm install -g \
    [email protected] \
    [email protected]
USER node

and set NODE_FUNCTION_ALLOW_EXTERNAL=* (or list the individually allowed npm packages, e.g. NODE_FUNCTION_ALLOW_EXTERNAL=langfuse,langfuse-langchain)
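For example, with Docker the build and run steps might look like this (the image tag and port mapping here are illustrative, not required values):

```shell
# Build the custom image from the Dockerfile above
docker build -t n8n-langfuse:1.53.2 .

# Run it, allowing Code nodes to require the two Langfuse packages
docker run -it --rm \
  -p 5678:5678 \
  -e NODE_FUNCTION_ALLOW_EXTERNAL=langfuse,langfuse-langchain \
  n8n-langfuse:1.53.2
```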

Then, you can create a LangChain Code node as usual, using the following as an example:

const { PromptTemplate } = require('@langchain/core/prompts');
const { CallbackHandler } = require('langfuse-langchain');
const { Langfuse } = require('langfuse');

// Langfuse configuration
const langfuseParams = {
  publicKey: "pk-lf-...",
  secretKey: "sk-lf-...",
  // k8s connection url to self-hosted Langfuse
  baseUrl: "http://langfuse.langfuse.svc.cluster.local:3000"
};

// Initialize Langfuse and its callback handler
const langfuse = new Langfuse(langfuseParams);
const langfuseHandler = new CallbackHandler(langfuseParams);

// Main async function
async function executeWithLangfuse() {
  // Get the input from n8n
  const topic = $input.item.json.topic;

  // Fetch the prompt from Langfuse
  const prompt = await langfuse.getPrompt("test-1");

  // Create a PromptTemplate from the Langfuse prompt
  const promptTemplate = PromptTemplate.fromTemplate(prompt.getLangchainPrompt());

  // Get the language model from n8n input
  const llm = await this.getInputConnectionData('ai_languageModel', 0);

  // Create the chain using pipe
  const chain = promptTemplate.pipe(llm);

  // Invoke the chain with the Langfuse callback handler attached
  const output = await chain.invoke(
    { topic: topic },
    { callbacks: [langfuseHandler] }
  );

  // Flush buffered events so traces aren't lost when the execution ends
  await langfuseHandler.flushAsync();

  return [{ json: { output } }];
}

// Execute the function and return the result
return executeWithLangfuse.call(this);
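As an aside, rather than hardcoding the keys in the node, you could pull them from environment variables passed to the container (the variable names below are my own choice, not an n8n or Langfuse requirement, and this assumes env access in nodes isn't blocked via N8N_BLOCK_ENV_ACCESS_IN_NODE):

```javascript
// Hypothetical: build the Langfuse config from environment variables
// instead of embedding secrets in the workflow itself
const langfuseParams = {
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  // Fall back to a local instance if no base URL is provided
  baseUrl: process.env.LANGFUSE_BASEURL ?? "http://localhost:3000",
};
```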

First of all, thank you very much for sharing. If it’s not too much to ask, could you share an example n8n workflow? From this code I was only able to fetch the prompts, which was already a big step forward.

I tried to install it but couldn’t get it to work. Is there a way to install the LangChain node through the community nodes instead? That would simplify the process a lot for me.

We really need this; the Langfuse devs are willing to work with the n8n devs to get this going.


Second this. Langfuse x n8n would be incredibly powerful. For reference, this is the GitHub discussion where the Langfuse maintainers offer to work on the integration: n8n Integration · langfuse · Discussion #4397 · GitHub. Hope the n8n team can get the collab going.


Oh yes please !