Return tool response as-is without additional formatting from the OpenAI model (AI Agent)

Project Overview

I want to build an AI agent that can access 2 tools:

  • General chat
  • Document generation

Each tool is a separate sub-workflow that is called from this main workflow. Inside each tool I use an HTTP Request node to call my existing external server, which lets me combine the agent with my own data and knowledge via Retrieval-Augmented Generation (RAG). I authenticate with my server using a JWT token.
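
Roughly, each tool sub-workflow calls my server like the sketch below (shown as Code-node JavaScript in “Run Once for Each Item” mode; the real workflow uses an HTTP Request node, and the URL and field names here are only placeholders):

// Sketch of the tool's call to my server. The endpoint URL, the jwtToken
// field, and the payload shape are placeholders, not the real values.
const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://my-rag-server.example.com/api/chat', // placeholder endpoint
  headers: {
    Authorization: `Bearer ${$json.jwtToken}`, // JWT for my own server
    'Content-Type': 'application/json',
  },
  body: { query: $json.chatInput },
  json: true,
});

return { json: response };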

The general chat tool is working great and returns the expected result.

Problem

The output from the document generation tool is a DOCX file that my server sends back as a base64 string. However, in the current workflow the document generation tool’s output is passed back into the LLM, so the model has to reproduce a very long base64 string in its answer.

How I Think It Can Be Fixed

I’ve thought of a couple of approaches to improve this:

  • Direct response handling:
    I’d love to return the tool’s response exactly as it is, without any additional formatting from the OpenAI model. This means the response should go straight to the webhook node after a tool call.

  • Direct return from sub-workflow:
    Return the document generation output directly to the webhook node, skipping any extra layers of processing (a sketch of the base64-to-binary step both approaches would need is shown right after this list).
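
Either way, the piece I think the main workflow needs is a Code node that converts the base64 string into an n8n binary property, so the Respond to Webhook node can send the file itself. A minimal sketch, assuming the Code node runs in “Run Once for All Items” mode and the sub-workflow returns the base64 under a data field (the field and file names are just examples):

// Sketch: turn the base64 DOCX from the document generation sub-workflow
// into an n8n binary property so "Respond to Webhook" can return the file.
// Assumes the default (in-memory) binary data mode and a `data` field.
const base64Docx = $input.first().json.data;

return [{
  json: { fileName: 'generated-document.docx' },
  binary: {
    data: {
      data: base64Docx, // n8n keeps binary content base64-encoded under `data`
      mimeType: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
      fileName: 'generated-document.docx',
      fileExtension: 'docx',
    },
  },
}];

The Respond to Webhook node would then respond with that binary property, so the base64 never has to pass through the model.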

I’d really appreciate any insights or suggestions you might have. Has anyone faced a similar issue, and how did you solve it?

Thanks a lot for your help!

Main Workflow

Sub-workflow: Document Generation

Sub-workflow: General Chat

Information on my n8n setup

  • n8n version: 1.72.1
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Windows 11 64-bit

Hello @rega, to return the document generation tool’s output directly without sending it back through OpenAI:

  1. Add a Code node after your “Execute Workflow” node that calls the document generation sub-workflow:
// Check if this is a document generation response
if ($input.item.json.contentType === 'application/vnd.openxmlformats-officedocument.wordprocessingml.document' || 
    $input.item.json.data?.includes('base64')) {
  
  // Set flag and keep raw response
  return {
    json: {
      isDocumentResponse: true,
      rawResponse: $input.item.json,
      // No need to send back to OpenAI
      toolResponse: "Document successfully generated. Returning file directly to user."
    }
  };
}

// Otherwise, proceed with normal processing
return $input.item;
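
  2. One way to finish the routing from there (a sketch, adjust to your node names): add an IF node on isDocumentResponse. Route the true branch to a Code node that converts rawResponse into a binary property (like the base64-to-binary sketch earlier in this topic) and then into your Respond to Webhook node, set to return binary data. Route the false branch back to the AI Agent so normal chat answers are still formatted by the model.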
