Capturing n8n flows with Observability & Evaluation platforms like Langfuse, Comet Opik, etc.

Dear n8n Friends:

We would like n8n to offer easy integration with Observability & Evaluation platforms such as Langfuse and Comet Opik, which enable instrumentation of n8n workflows, whether for a subset of a workflow or end to end.

I think each of these offers an OTel interface; OpenTelemetry is a common open-source standard for telemetry.

Apart from helping n8n users test, debug, and run evaluations on workflows, this critical missing piece would set n8n apart from competing drag & drop builder offerings.

See this separate conversation, and thank you:

Langfuse devs are already 100% on board; they are just waiting for someone at n8n to acknowledge it, and they would be more than happy to work with the n8n team.

8 Likes

100% - need tracing & evals for working with LLMs

2 Likes

Hey

I’m part of the team at Comet; thanks for calling us out. We have actively started on an integration between Comet Opik and n8n. You can follow the issue on GitHub: n8n Integration · langfuse · Discussion #4397 · GitHub

Thanks
Vince

2 Likes

Hi everyone,

Langfuse maintainer here. We are also happy to contribute to this effort. Tracking interest for this integration here: n8n Integration · langfuse · Discussion #4397 · GitHub

I would suggest using OpenTelemetry to allow for interoperability with different observability and evaluation backends. That way, n8n users could configure their OTel endpoint and authentication in the UI (or as environment variables when self-hosting n8n) and it should just work.
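
To make this concrete, here is a minimal sketch of what that bootstrap could look like in Node.js. This is illustrative only, not a description of how n8n would actually wire it up.

// Minimal OpenTelemetry exporter bootstrap (illustrative sketch, not n8n's implementation).
// Requires: npm install @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

// The OTLP exporter honors the standard env vars, e.g.
//   OTEL_EXPORTER_OTLP_ENDPOINT=https://<your-backend>/v1/traces
//   OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64 public:secret>"
// so any OTel-compatible backend (Langfuse, Opik, MLflow, Phoenix, ...) can be swapped in
// without code changes.
const sdk = new NodeSDK({
  serviceName: 'n8n',
  traceExporter: new OTLPTraceExporter(),
});

sdk.start(); // spans created by instrumented workflow code are exported via OTLP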

A good read on this: AI Agent Observability - Evolving Standards and Best Practices | OpenTelemetry

Let me know if I can be helpful in figuring this out.

13 Likes

This is what we need. :sunglasses:

1 Like

UP! :grimacing:

N8N TEAM? :rocket:

2 Likes

LangChain has a couple of environment variables to connect to LangSmith.

I’m wondering why it isn’t possible to put a Langfuse URL endpoint in there instead of LangSmith’s. This seems like an obvious solution, but since nobody has suggested it, I assume there is an obstacle. Is there?
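
For reference, these are the LangSmith-related environment variables that LangChain reads (a Node-style sketch; the values shown are LangChain's defaults):

// Illustrative only: the tracing env vars LangChain reads for LangSmith.
process.env.LANGCHAIN_TRACING_V2 = 'true';
process.env.LANGCHAIN_ENDPOINT = 'https://api.smith.langchain.com'; // default LangSmith endpoint
process.env.LANGCHAIN_API_KEY = '<your LangSmith API key>';
process.env.LANGCHAIN_PROJECT = 'n8n-workflows'; // optional project name
// Presumably, pointing LANGCHAIN_ENDPOINT at another URL only works if that backend
// speaks the LangSmith ingestion API, which may be the obstacle.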

3 Likes

I hacked this to work with Helicone and Langfuse. It works with n8n Cloud and doesn’t require you to self-host or create a custom node.

Here is what you can do:

Step 1 - create an API proxy that sits between n8n and the LLM / observability platform (Helicone, Langfuse, etc).

  • Create a new proxy API on whatever cloud platform you work with, or deploy one with FastAPI. I use GCP Cloud Functions. This API needs to accept the /models and /chat/completions endpoints, which n8n uses to authenticate and route model requests when using the OpenAI node (we’ll modify that node later to point at our proxy).
  • This API simply takes the body from the n8n request and forwards it to the Helicone URL, or you could route it directly to OpenRouter. I opted for OpenRouter with Helicone so I can easily send to different models rather than just OpenAI. If you use Langfuse, this is also where you could add your tracing code (see the sketch after this list).
  • I inject the necessary headers to make Helicone work, but other than that the body is passed through from n8n with no major transformations.
  • You have to handle SSE streaming responses, since n8n now sends the stream flag as true, which was surprising to see when I logged the JSON sent from n8n.
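
For the Langfuse route, here is a minimal sketch of what the tracing code inside the proxy could look like, using the langfuse Node SDK. The names and span structure are illustrative, not lifted from my actual deployment.

// Minimal Langfuse tracing sketch for the proxy (illustrative).
// Requires: npm install langfuse
const { Langfuse } = require('langfuse');

const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  baseUrl: process.env.LANGFUSE_BASE_URL, // e.g. https://cloud.langfuse.com
});

// Wrap the LLM call inside the /chat/completions handler (non-streaming case shown):
async function tracedCompletion(requestBody, sessionId, callLLM) {
  const trace = langfuse.trace({ name: 'n8n-proxy-completion', sessionId });
  const generation = trace.generation({
    name: 'chat-completion',
    model: requestBody.model,
    input: requestBody.messages,
  });
  const completion = await callLLM(requestBody); // e.g. client.chat.completions.create(...)
  generation.end({ output: completion.choices?.[0]?.message });
  await langfuse.flushAsync(); // flush before the Cloud Function instance is frozen
  return completion;
}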

Step 2 - configure your new API as the base URL in the OpenAI node

  • Create a new OpenAI auth credential in n8n and change the base URL to the API URL you just created. n8n will attempt to authenticate against /models, which is appended to the base URL; as long as you handle that path and return a 200 response, it will work. You can put whatever you want in the API key field here.

Step 3 - add the OpenAI node to your agent or LLM chain

  • Create an OpenAI node, connect it to one of your agents, and select the new credential you just created.
  • Change the model name to match the OpenRouter naming convention => openai/{model_name}
  • Since our proxy routes to OpenRouter, this node is now effectively an OpenRouter node that sends data through our proxy first.

Step 4 - optionally pass a session ID

  • Add the Response Format option in the OpenAI node and set it as an expression, pulling in the session ID so you can pass it along to Helicone or Langfuse. This is a bit hacky, no doubt, but there is no other way to send extra data unless you want to start parsing the system or user prompt. In the proxy I strip this field out, don’t forward it, and map the session ID to the Helicone session ID header.

That’s pretty much it. Here’s the sample code for my proxy API in JS that makes this work with Helicone.

/**
 * Responds to HTTP requests for /models and /chat/completions paths.
 * Handles SSE streaming for /chat/completions if stream: true is in the request.
 * Sends Helicone-Session-Id and other Helicone-specific headers using request-specific headers with client.chat.completions.create().
 *
 * @param {!express:Request} req HTTP request context.
 * @param {!express:Response} res HTTP response context.
 */

// Import the OpenAI library.
// Make sure to add "openai" to your package.json dependencies.
const OpenAI = require("openai");

// Initialize the OpenAI client.
// IMPORTANT: Set the OPENROUTER_API_KEY environment variable in your Cloud Function's configuration.
const client = new OpenAI({
  apiKey: process.env.OPENROUTER_API_KEY, // Env variable that stores my OpenRouter key
  baseURL: "https://openrouter.helicone.ai/api/v1",
  defaultHeaders: {
    "Helicone-Auth": "Bearer {{YOUR API KEY HERE}}" // Replace with your actual Helicone API key. I was working fast and didn't move this to an env var yet
  }
});

exports.proxy = async (req, res) => {
  // Log the request path for debugging
  console.log('Request path:', req.path);
  console.log('Request method:', req.method);

  // Enable CORS for all origins (consider restricting in production)
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  // Ensure 'Content-Type' and 'Authorization' are allowed, plus any custom headers like 'Helicone-Auth'
  res.set('Access-Control-Allow-Headers', 'Content-Type, Authorization, Helicone-Auth, Helicone-Session-Id, Helicone-Moderations-Enabled, Helicone-LLM-Security-Enabled, Helicone-Session-Name'); 

  // Handle OPTIONS requests for CORS preflight
  if (req.method === 'OPTIONS') {
    res.status(204).send('');
    return;
  }

  if (req.path === '/models') {
    // Handle requests to /models path
    console.log('Handling /models request');
    res.status(200).send('Success: /models path hit');

  } else if (req.path === '/chat/completions') {
    // Handle requests to /chat/completions path
    console.log('Handling /chat/completions request');
    console.log('Original received payload for /chat/completions:', JSON.stringify(req.body, null, 2));

    if (!process.env.OPENROUTER_API_KEY) { // Check for OPENROUTER_API_KEY
        console.error('OPENROUTER_API_KEY environment variable is not set.');
        res.status(500).send('Server configuration error: Missing API key. Please set the OPENROUTER_API_KEY environment variable.');
        return;
    }

    // Extract response_format.type into session variable, and remove response_format from forwardedBody
    let session = null;
    const { response_format, ...forwardedBodyWithoutResponseFormat } = req.body;
    
    // Headers for this specific outgoing request
    const requestSpecificHeaders = {
        'Helicone-Moderations-Enabled': 'true',
        'Helicone-LLM-Security-Enabled': 'true',
        'Helicone-Session-Name': 'Agent Demo'
    }; 

    if (response_format && typeof response_format === 'object' && response_format.type) {
      session = response_format.type;
      console.log('Extracted session ID (response_format.type):', session);
      // Add Helicone-Session-Id to the outgoing headers for this specific request
      requestSpecificHeaders['Helicone-Session-Id'] = session;
      console.log('Helicone-Session-Id will be added to outgoing request headers:', session);
    } else {
      console.log('No response_format.type found or response_format is not as expected. Helicone-Session-Id will not be sent.');
    }
    
    console.log('Payload after removing "response_format" (if present):', JSON.stringify(forwardedBodyWithoutResponseFormat, null, 2));

    // Use the modified forwardedBody for the API call.
    const requestBody = forwardedBodyWithoutResponseFormat;
    const isStreaming = requestBody.stream === true;
    
    // Prepare options for the client.chat.completions.create call
    const clientOptions = {};
    if (Object.keys(requestSpecificHeaders).length > 0) {
        clientOptions.headers = requestSpecificHeaders;
    }
    // If you have other per-request options (e.g., query parameters, idempotencyKey), add them to clientOptions.
    // clientOptions.query = { ... };
    // clientOptions.idempotencyKey = 'my-key';

    try {
      if (isStreaming) {
        // --- Streaming SSE Response using client.chat.completions.create ---
        console.log('Streaming request detected. Initiating SSE response.');
        console.log('Outgoing request options (stream):', JSON.stringify(clientOptions, null, 2));

        const completionStream = await client.chat.completions.create(requestBody, clientOptions);

        // Set SSE headers for the response to the client calling this proxy
        res.setHeader('Content-Type', 'text/event-stream');
        res.setHeader('Cache-Control', 'no-cache');
        res.setHeader('Connection', 'keep-alive');
        res.flushHeaders(); // Send headers immediately

        for await (const chunk of completionStream) {
          const sseFormattedData = `data: ${JSON.stringify(chunk)}\n\n`;
          res.write(sseFormattedData);
          // console.log('Sent chunk:', JSON.stringify(chunk, null, 2)); // Verbose: log each sent chunk
        }
        res.write('data: [DONE]\n\n');
        console.log('Stream finished. Sent [DONE].');
        res.end(); // End the SSE stream
      } else {
        // --- Non-Streaming JSON Response using client.chat.completions.create ---
        console.log('Non-streaming request detected. Sending full JSON response.');
        console.log('Outgoing request options (non-stream):', JSON.stringify(clientOptions, null, 2));
        
        const completion = await client.chat.completions.create(requestBody, clientOptions);
        
        console.log('OpenRouter API response (non-streaming):', JSON.stringify(completion, null, 2));
        res.status(200).json(completion);
      }
    } catch (error) {
        console.error('Error during OpenRouter API call:', error.message, error.stack);
        if (error.response) { // Axios-like error structure from OpenAI SDK
            console.error('OpenRouter API Error Response Status:', error.response.status);
            console.error('OpenRouter API Error Response Data:', JSON.stringify(error.response.data, null, 2));
             if (!res.headersSent) {
                res.status(error.response.status || 500).json(error.response.data || { message: error.message });
            } else if (!res.writableEnded) {
                 res.end(); // End stream if error occurs after headers sent
            }
        } else if (!res.headersSent) {
            res.status(500).send(error.message || 'Error processing your request with OpenRouter');
        } else if (!res.writableEnded) {
            // Try to send an error event if possible, though this might not always work reliably for streams
            try {
                const errorPayload = JSON.stringify({ error: { message: error.message || 'Stream error' }});
                res.write(`event: error\ndata: ${errorPayload}\n\n`);
                console.error('Attempted to send error event to stream.');
            } catch (e) {
                console.error("Failed to write error event to stream:", e);
            }
            res.end();
        }
    }
  } else {
    // If the path is not recognized, send a 404 Not Found response
    console.log(`Path ${req.path} not found`);
    res.status(404).send('Not Found: This function only responds to /models and /chat/completions');
  }
};

1 Like

Hi everyone,

I am part of the team at Future AGI. We have an open-source tracing library compatible with OpenTelemetry named TraceAI. We would be very happy to integrate n8n with our platform. Follow the issue here if you want to contribute: Enable AutoInstrumentation for n8n workflows to send traces · Issue #97 · future-agi/traceAI · GitHub

1 Like

Hi everyone,

I’m a maintainer of MLflow, a tool that provides an observability/evaluation solution similar to Langfuse and Opik. Huge +1 to adding OpenTelemetry-based instrumentation. It would allow users to plug their favorite Ops platform into their workflow and deploy it to production with confidence. We are happy to contribute if this is something the n8n team is interested in!

1 Like

I am a maintainer of Arize Phoenix and we are also willing to support!

2 Likes

Deep n8n Observability with OpenTelemetry + Langfuse

Hey n8n community!

Many of us who self-host n8n have been looking for deeper insights into our workflow executions. Building on the fantastic foundational work shared by @stuartjohnson and the valuable iterations from @muxxi in their original OpenTelemetry post, I’ve put together a proof-of-concept specifically for Langfuse.

This guide provides all the code and a step-by-step tutorial to instrument your self-hosted n8n instance to send detailed traces directly to Langfuse. This is especially powerful for anyone building AI workflows, as it maps n8n nodes to Langfuse observation types (generation, agent, tool, etc.).

The result? You get a detailed trace for every workflow run, showing each node as a child span, complete with metadata and I/O.
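
To illustrate the idea (the names here are hypothetical; the repo may do this differently), the node-to-observation-type mapping can be as simple as a lookup on the n8n node type string:

// Hypothetical mapping from n8n node types to Langfuse observation types.
// The actual PoC on GitHub may use different identifiers and a richer mapping.
function toObservationType(nodeType) {
  const t = nodeType.toLowerCase();
  if (t.includes('lmchat') || t.includes('openai')) return 'generation';
  if (t.includes('agent')) return 'agent';
  if (t.includes('tool')) return 'tool';
  if (t.includes('retriever') || t.includes('vectorstore')) return 'retriever';
  return 'span'; // everything else becomes a plain child span with metadata and I/O
}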

You can find all the details on Github:

I hope this helps others in the community get the detailed observability they need. But most of all, I hope this will inspire the n8n development team; we need this ASAP! :rocket::smiling_face_with_sunglasses:

Feedback and improvements are very welcome.

Happy building.

Cheers

Pinging @marcklingen and @jan: please have a look at this route; I hope it inspires you.