If you haven’t noticed - I certainly didn’t! - the Langchain Code node got a big update which allows for creating attachable custom language model subnodes! Among many things, this means you can now use tools like Langfuse, and I’ll show you how!
Hey there, I’m Jim! If you’ve enjoyed this article, please consider giving me a “like” and follow on Linkedin or X/Twitter. For more similar topics, check out my other AI posts in the forum.
In this article, we’ll take advantage of the fact that we can now provide our own custom language model subnode and use it to add Langfuse - an open-source observability tool for LLMs. This is only one of many fun ways to put this powerful n8n node to work, and hopefully it inspires more ideas in fellow tinkerers!
Good to know: At the time of testing, the custom language model subnode can only be attached to non-“AI Agent” nodes: Basic LLM Chain, Information Extractor, Question & Answer Chain, Sentiment Analysis, Summarization Chain and Text Classifier.
Requirements
- Self-hosted n8n version 1.80+
- Install the langfuse-langchain package in the environment where n8n runs:
npm i -g langfuse-langchain
- Sign up for a Langfuse cloud account, or you can self-host Langfuse.
- Some coding experience and Langchain familiarity required!
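One setup detail worth checking before you start (an assumption based on how self-hosted n8n restricts the Code node - your install may already allow this): external modules must be whitelisted via an environment variable before the node can require them.

```shell
# Allow n8n's Code node to require the external langfuse-langchain module.
# The value is a comma-separated list - append to it if you already
# whitelist other modules.
export NODE_FUNCTION_ALLOW_EXTERNAL=langfuse-langchain
```

If you run n8n in Docker, pass this with `-e` or in your compose file instead.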
Let’s Code!
- Essentially, we start by adding the Langchain Code node and setting its “output” type to “language model”.
- Because this is a subnode, our code type is “supply data”. The return type of our subnode will be an instance of Langchain’s ChatModel class.
- Within the “supply data” code textarea, we first define which chat model class to use. In the template, I’ve given examples for the three most popular model providers: OpenAI, Gemini and Anthropic.
- Next, we import the Langfuse library and use its “CallbackHandler” class. This handler is designed specifically to work with Langchain, which makes integration easy.
- Unfortunately, with the code node you’ll need to hardcode your Langfuse auth keys, as well as your model’s API key, in their respective classes.
- Simply adding the callback handler to our chat model instance’s callbacks array is enough to add basic tracing to every call!
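Putting the steps above together, the “supply data” code might look something like this. This is a minimal OpenAI sketch, not the template itself - the keys, model name and base URL are placeholders, and it assumes the langfuse-langchain and @langchain/openai packages are requireable from the Code node:

```javascript
// n8n Langchain Code node - "Supply Data" (sketch, adjust keys/model).
const { ChatOpenAI } = require('@langchain/openai');
const { CallbackHandler } = require('langfuse-langchain');

// Hardcoded credentials - the Code node can't read n8n credentials.
const langfuseHandler = new CallbackHandler({
  publicKey: 'pk-lf-...',   // your Langfuse public key
  secretKey: 'sk-lf-...',   // your Langfuse secret key
  baseUrl: 'https://cloud.langfuse.com', // or your self-hosted URL
});

const model = new ChatOpenAI({
  apiKey: 'sk-...',          // your OpenAI API key
  model: 'gpt-4o-mini',
  // Attaching the handler here traces every call through this subnode.
  callbacks: [langfuseHandler],
});

// n8n attaches the returned chat model instance as the subnode's output.
return model;
```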
Template
The following template demonstrates 3 working examples of integrating Langfuse with OpenAI, Gemini and Anthropic. The beauty of these subnodes is that they can be drop-in replacements in many production AI workflows!
- Note that the Gemini and Anthropic subnodes contain extra code. It is a known issue that Langfuse has incomplete support for these models’ outputs, so some key metrics are missing from their traces; the extra code patches these minor issues.
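To illustrate the shape such a patch can take, here is a hypothetical callback that copies provider-style usage fields (`input_tokens`/`output_tokens`) into the `tokenUsage` shape that tracing tools commonly read. The field names here are assumptions for illustration - check your model’s actual llmOutput before relying on this:

```javascript
// Hypothetical patch: some providers report token usage as
// { input_tokens, output_tokens }, while the trace handler reads
// llmOutput.tokenUsage. This callback bridges the two shapes.
const usagePatch = {
  handleLLMEnd(output) {
    const usage = output.llmOutput && output.llmOutput.usage;
    if (usage && !output.llmOutput.tokenUsage) {
      output.llmOutput.tokenUsage = {
        promptTokens: usage.input_tokens ?? 0,
        completionTokens: usage.output_tokens ?? 0,
        totalTokens: (usage.input_tokens ?? 0) + (usage.output_tokens ?? 0),
      };
    }
  },
};

// Register it BEFORE the Langfuse handler, e.g.
// callbacks: [usagePatch, langfuseHandler],
// so the patched output is what Langfuse sees.
```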
Licence: The following code is released under the MIT licence.
Conclusion
LLM observability is something I’ve been meaning to learn more about, but I was never a particularly big fan of Langsmith. Now that Langfuse can be a real alternative for n8n AI workflows (just not agents yet!), I can definitely see myself investing more time exploring it in my projects and as a value-add for my clients.
I’ll be adding more example templates for other possible use cases on my creator page soon, so do check it out later! Got some ideas of your own? Let me know via DM or [email protected]. Cheers!
Still not signed up to n8n cloud? Support me by using my n8n affiliate link.
Need more AI templates? Check out my Creator Hub for more free n8n x AI templates - you can import these directly into your instance!