LangChain Code Node - Throwing Errors

Is there a way to pass an error message from a LangChain Code Node to the error output of a Basic LLM Chain?

Neither return <string> nor throw new Error('…') works. Instead, I only get the generic error "Error in sub-node <LangChain-Code-Node-Name>".

Here is a simple setup with a throw new Error() in the LangChain Code node:
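For illustration, a minimal sketch of what the node's code does (the function name and message below are hypothetical; inside n8n there is no wrapper function, the node body just throws):

```javascript
// Hypothetical stand-in for the LangChain Code node's "Execute" code.
// The thrown message is what we would hope to see downstream.
function langChainCodeExecute() {
  throw new Error("Custom error from LangChain Code node");
}

let caught;
try {
  langChainCodeExecute();
} catch (e) {
  // In plain JavaScript the message is preserved; in n8n it is
  // replaced by the generic "Error in sub-node <name>" text.
  caught = e.message;
}
console.log(caught);
```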

However it returns only this in the error output of the Basic LLM Chain:

[
  {
    "sessionId": "4ef17ddad4e24d9b8cc649d16d1d017c",
    "action": "sendMessage",
    "chatInput": "Hello",
    "error": "Error in sub-node LangChain Code"
  }
]

Information on your n8n setup

  • n8n version: 1.89.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu 22.04 LTS

Hi @octionic

See if this solution makes sense to you

Add a Function node after the Basic LLM Chain to process the output and check for errors:

// In the Function node after the Basic LLM Chain
const result = $input.item;

// Check whether the LangChain Code node produced an error
if (result.error && result.error.includes("Error in sub-node")) {
  // Replace it with your custom message (parentheses are needed so the
  // fallback applies to the missing value, not to the concatenated string)
  result.error = "Custom error: " + ($('LangChain Code').errorMessage || "Details not available");
}

return result;
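One JavaScript pitfall worth flagging in code like the above: string concatenation (`+`) binds tighter than `||`, so a fallback written as `"prefix: " + value || "fallback"` never fires, because the concatenated string is always truthy. A small runnable demonstration (the variable names are mine, simulating the error message being absent):

```javascript
// Simulate $('LangChain Code').errorMessage being undefined
const errorMessage = undefined;

// Without parentheses: concatenation happens first, producing a
// truthy string, so the || fallback is never reached.
const withoutParens = "Custom error: " + errorMessage || "Details not available";

// With parentheses: the fallback applies to the missing value itself.
const withParens = "Custom error: " + (errorMessage || "Details not available");

console.log(withoutParens); // "Custom error: undefined"
console.log(withParens);    // "Custom error: Details not available"
```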

If this suggestion solved your problem, please mark my post as the solution (blue box with check mark), so this discussion doesn't distract others looking for the answer to the original question, and click the heart. Thanks :blush:

That does not work for me. $('LangChain Code').errorMessage is not defined when I try to read it right after the Basic LLM Chain node.
Could you share a working example?

I love how AI can be so confident in its wrong answers. :blush:

It’s daytime here in Brazil, so good morning!

I hope you’re doing well.

When I was a child, I had to look for books in the library to study. I admit there's no point in having books to study if you don't know what you're looking for, and no point in knowing what you're looking for if you don't know how to study, ask yourself the right questions, and validate the answers you get from your research.

Nothing has changed from the past to today when it comes to studying, even when using the famous AI.

Today, it's very necessary to use AI to familiarize yourself with technological subjects; however, there's no point in asking AI a question, or even studying with AI, without knowing how to validate and check what the AI came up with as an answer.

Having the parameters of a good PROMPT (knowing how to ask) and validating the AI’s answer (as if you were in the library, to understand the passage that the book presents as an answer to the question) are fundamental.

So what the respectable and very intelligent @BramKn said is very true: it is useless to use any AI response without doing parallel research on the subject and validating what the AI responded.

That said, I want to share that I am new to the community and here in Brazil I need to work at least twice as hard as people from first world countries, because everything is harder to achieve here.

Now you must imagine the following:

When I see a question in the community, I need to copy it and run it through Google Translate into Portuguese, my language.

After translating, I understand what the user needs and see if I can help in any way.

When I already know the subject, I write my answer in Portuguese, translate it into English, and post it in the community. But before posting, I need to put each part of the answer in its place and adjust the excerpts, especially when there is a code suggestion.

This has been both exciting and challenging, because I end up learning along with it and being useful in some way, even though it takes a lot of time out of my day to do it.

I created an initial greeting template in a Word document to avoid writing the same greeting every time, and guess what happened? The n8n system ended up blocking some of my posts for containing the same template. It was actually good that this happened, since the check exists to prevent automated responses, so I learned that I need to customize each response individually. I am grateful to learn from this and to grow together with the n8n community.

Please note that I am not paid for this and do not currently have a programming job, but I keep studying, learning, and being useful, because I am sure I will get a job or contract where I can be useful to a company and its people, and get paid in dollars or euros.

About your problem: I have never run into it in real life, but I took the opportunity to study your case and tested it in some simulated situations, which suggested it could be made to work. That said, I will study it further and present some suggestions that you can test to see if they make sense for your problem.

Test in a separate workflow to validate the suggestion.

Thanks @octionic and @BramKn , have a great day

Maybe someone else has an answer to this?

Hi @octionic

I tried wrapping the code in a try-catch block. It does output an error, but I haven’t figured out how to force it to return a specific error message.

try {
  // Get the connected language model
  let llm = await this.getInputConnectionData('ai_languageModel', 0);

  // Test logic for interrupting node execution
  if (true) {
    throw new Error("Here is the error message");
  }

  return llm;
} catch (error) {
  const msg = error.message;
  console.log(msg);
  // n8n expects `json` to be an object, so wrap the message
  return [{ json: { error: msg } }];
}

You can see the error in the console log:

However, the Basic LLM Chain node itself just outputs a generic internal error:

I’m not sure if this helps, but I wanted to share it in case it’s useful.
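One possible explanation for the behavior above (my reading, not confirmed by n8n docs): once the catch block returns data, the node finishes successfully from the caller's point of view, so there is no error for the Basic LLM Chain to propagate. A runnable sketch of that semantics in plain JavaScript, with a hypothetical nodeCode function standing in for the node body:

```javascript
// Stand-in for the node body: the throw is caught internally and
// converted into ordinary return data, so the caller never sees a failure.
function nodeCode() {
  try {
    throw new Error("Here is the error message");
  } catch (error) {
    return [{ json: { error: error.message } }];
  }
}

const out = nodeCode();
// The caller receives normal output, not an exception.
console.log(out[0].json.error);
```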

I have been there already. It doesn’t solve the problem.