HTTP Request node fails with "Invalid JSON" error when using Expression

Hello,

I am trying to call the Google Gemini API using the HTTP Request node. When I build the request body using an expression, it keeps failing with the error “JSON parameter needs to be valid JSON”, even though the expression is correct.

I am on the n8n.cloud trial version.

Here is my workflow setup:

  • Node: HTTP Request
  • Method: POST
  • URL: https://generativelanguage.googleapis.com/...
  • Body Field: Set to “Expression”

This is the code in my Expression Editor:

JavaScript={{ { "contents": [ { "parts": [ { "text": You are an expert SEO analyst… (and the rest of the prompt) } ] } ] } }}

I have already confirmed the API key is correct and tried building the body in raw text mode as well, but the validator fails. It seems like the expression is not being evaluated before the JSON validation runs. Can you please help?


Remove the = and try again.

If that doesn’t work, we might need your full expression text here.
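
For example, something along these lines should pass the validator, with the prompt text inside quotes so the whole body is valid JSON (prompt shortened here, this is just a sketch of the shape):

    {
      "contents": [
        {
          "parts": [
            { "text": "You are an expert SEO analyst… (rest of the prompt)" }
          ]
        }
      ]
    }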

I’m also having this issue and I do not have an equal sign anywhere.

The two nodes are identical except that one uses an expression and the other doesn’t.

  • Without expression
    {
        "breakpointLocation": "NONE",
        "browserLog": false,
        "closeCookieModals": false,
        "debugLog": false,
        "downloadCss": false,
        "downloadMedia": false,
        "excludes": [
            {
                "glob": "/**/*.{png,jpg,jpeg,pdf}"
            }
        ],
        "headless": true,
        "ignoreCorsAndCsp": false,
        "ignoreSslErrors": false,
        "injectJQuery": true,
        "keepUrlFragments": false,
        "linkSelector": "a[href]",
        "pageFunction": "// The function accepts a single argument: the \"context\" object.\n// For a complete list of its properties and functions,\n// see https://apify.com/apify/web-scraper#page-function \nasync function pageFunction(context) {\n    // This statement works as a breakpoint when you're trying to debug your code. Works only with Run mode: DEVELOPMENT!\n    // debugger; \n\n    // jQuery is handy for finding DOM elements and extracting data from them.\n    // To use it, make sure to enable the \"Inject jQuery\" option.\n    const $ = context.jQuery;\n    const pageTitle = $('title').first().text();\n\n    // Define a selector string for all the elements you want to remove\n    const elementsToRemove = `nav, footer, script, style, noscript, svg, img[src^=\"data:\"], [role=\"alert\"], [role=\"banner\"], [role=\"dialog\"], [role=\"alertdialog\"], [role=\"region\"], [aria-label*=\"skip\"], [aria-modal=\"true\"]`;\n\n    const fullBodyText = $('body')\n    .clone()                   // clone to prevent side effects\n    .find(elementsToRemove)  // remove elements that aren't user-facing\n    .remove()\n    .end()\n    .text()\n    .replace(/\\s+/g, ' ')      // clean excessive whitespace\n    .trim();\n\n    // Print some information to Actor log\n    context.log.info(`URL: ${context.request.url}, TITLE: ${pageTitle}`);\n\n    // Return an object with the data extracted from the page.\n    // It will be stored to the resulting dataset.\n    return {\n        url: context.request.url,\n        fullBodyText,\n        pageTitle\n    };\n}",
        "postNavigationHooks": "// We need to return array of (possibly async) functions here.\n// The functions accept a single argument: the \"crawlingContext\" object.\n[\n    async (crawlingContext) => {\n        // ...\n    },\n]",
        "preNavigationHooks": "// We need to return array of (possibly async) functions here.\n// The functions accept two arguments: the \"crawlingContext\" object\n// and \"gotoOptions\".\n[\n    async (crawlingContext, gotoOptions) => {\n        // ...\n    },\n]\n",
        "proxyConfiguration": {
            "useApifyProxy": true
        },
        "respectRobotsTxtFile": true,
        "runMode": "PRODUCTION",
        "startUrls": [
            {
                "url": "http://teamncw.com",
                "method": "GET"
            }
        ],
        "useChrome": false,
        "waitUntil": [
            "networkidle2"
        ]
    }
    
  • With Expression
    {
      "breakpointLocation": "NONE",
      "browserLog": false,
      "closeCookieModals": false,
      "debugLog": false,
      "downloadCss": false,
      "downloadMedia": false,
      "excludes": [
          {
              "glob": "/**/*.{png,jpg,jpeg,pdf}"
          }
      ],
      "headless": true,
      "ignoreCorsAndCsp": false,
      "ignoreSslErrors": false,
      "injectJQuery": true,
      "keepUrlFragments": false,
      "linkSelector": "a[href]",
      "pageFunction": "// The function accepts a single argument: the \"context\" object.\n// For a complete list of its properties and functions,\n// see https://apify.com/apify/web-scraper#page-function \nasync function pageFunction(context) {\n    // This statement works as a breakpoint when you're trying to debug your code. Works only with Run mode: DEVELOPMENT!\n    // debugger; \n\n    // jQuery is handy for finding DOM elements and extracting data from them.\n    // To use it, make sure to enable the \"Inject jQuery\" option.\n    const $ = context.jQuery;\n    const pageTitle = $('title').first().text();\n\n    // Define a selector string for all the elements you want to remove\n    const elementsToRemove = `nav, footer, script, style, noscript, svg, img[src^=\"data:\"], [role=\"alert\"], [role=\"banner\"], [role=\"dialog\"], [role=\"alertdialog\"], [role=\"region\"], [aria-label*=\"skip\"], [aria-modal=\"true\"]`;\n\n    const fullBodyText = $('body')\n    .clone()                   // clone to prevent side effects\n    .find(elementsToRemove)  // remove elements that aren't user-facing\n    .remove()\n    .end()\n    .text()\n    .replace(/\\s+/g, ' ')      // clean excessive whitespace\n    .trim();\n\n    // Print some information to Actor log\n    context.log.info(`URL: ${context.request.url}, TITLE: ${pageTitle}`);\n\n    // Return an object with the data extracted from the page.\n    // It will be stored to the resulting dataset.\n    return {\n        url: context.request.url,\n        fullBodyText,\n        pageTitle\n    };\n}",
      "postNavigationHooks": "// We need to return array of (possibly async) functions here.\n// The functions accept a single argument: the \"crawlingContext\" object.\n[\n    async (crawlingContext) => {\n        // ...\n    },\n]",
      "preNavigationHooks": "// We need to return array of (possibly async) functions here.\n// The functions accept two arguments: the \"crawlingContext\" object\n// and \"gotoOptions\".\n[\n    async (crawlingContext, gotoOptions) => {\n        // ...\n    },\n]\n",
      "proxyConfiguration": {
          "useApifyProxy": true
      },
      "respectRobotsTxtFile": true,
      "runMode": "PRODUCTION",
      "startUrls": [
          {
              "url": "http://{{ $json.domain }}",
              "method": "GET"
          }
      ],
      "useChrome": false,
      "waitUntil": [
          "networkidle2"
      ]
    }

Here’s the workflow; I didn’t realize I could copy and paste the JSON directly in here.

@automatForge try wrapping your long strings (especially the pageFunction one) in JSON.stringify and see if that helps.
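
Roughly like this, with the rest of the body unchanged; $json.pageFunction here is just a stand-in for wherever you keep the function source (a prior Set or Code node, for example). The point is to let JSON.stringify produce the quoting and escaping instead of pasting the raw multi-line string straight into the JSON:

    "pageFunction": {{ JSON.stringify($json.pageFunction) }},
    "startUrls": [
        { "url": "http://{{ $json.domain }}", "method": "GET" }
    ],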


I tried this before and it didn’t work.

I did it again just now because I had nothing to lose and it worked… :person_facepalming:

Good old “turn it off and on again”, I suppose.

Thanks for the help @jabbson!

@Noah_Croydon You might benefit from the same fix. Does your HTTP request have any long strings?
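
If so, the same pattern is worth a try in your JSON body. For the Gemini call it could look something like this, where $json.prompt is just a placeholder for wherever your long prompt text comes from; JSON.stringify adds the surrounding quotes and escapes any newlines and quotes for you:

    {
      "contents": [
        {
          "parts": [
            { "text": {{ JSON.stringify($json.prompt) }} }
          ]
        }
      ]
    }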