Why does the HTTP Request node change my header key?


Authorization → authorization
How can I fix this?

Hi @Trung_Nguy_n Welcome to the n8n community :tada:

Is it breaking something, or does it still work?

It causes an error, because the system receiving the request expects Authorization.

Hi @Trung_Nguy_n, can you give us some more information about your setup?

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:


Information on your n8n setup

  • n8n version: 1.3
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux

Thank you!

I was able to reproduce it in version 1.8.1. Everything passed in the “Header Parameters” is written in lower case. However, version 1.8.1 (and for sure also 1.7.1; I don’t know about earlier versions) offers header authentication via n8n credentials directly. The credentials are then also stored securely rather than in a single node.
With that approach, the header name is sent with the correct upper/lower case for me.

Oh, I see. Header Parameters Name: Accept → accept
Same as in my case. I declared Name: Authorization → authorization

Yes correct, everything entered within the HTTP node under the header parameters is automatically written in lower case.

But if you don’t use the header parameters in the node itself and instead use the “Generic Credential Type” with “Header Auth”, it works. Then the header name is not written in lower-case letters.
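To illustrate why the lowercasing matters at all: HTTP header names are case-insensitive per the spec, so most servers accept `authorization`. The error in this thread comes from strict servers that do an exact-match lookup. The sketch below uses a hypothetical server-side check (not any real API) to show why the node’s lowercased header fails while the Header Auth credential, which preserves case, succeeds:

```python
# Hypothetical, non-compliant server that does a case-sensitive lookup
# on the header name (this is the failure mode described above).
def strict_server_check(headers: dict) -> bool:
    return headers.get("Authorization") == "Bearer secret"

# The HTTP Request node's "Header Parameters" lowercase the name:
sent_by_node = {"authorization": "Bearer secret"}
print(strict_server_check(sent_by_node))   # False -> request rejected

# With Header Auth credentials the original case is preserved:
sent_with_header_auth = {"Authorization": "Bearer secret"}
print(strict_server_check(sent_with_header_auth))  # True -> accepted
```

A spec-compliant server would treat both the same; this only bites with case-sensitive backends.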


Thanks. I have fixed the error.

Hi @Trung_Nguy_n and @seljo-ch ,

I think I had a similar problem to yours. I am trying to scrape the content from the link below:


I found that I need to pass a “User-Agent” header with the GET HTTP operation when I try to scrape that webpage with n8n, but then I get a 403 error. I suspected that there might be a similar issue with the header handling as well.

(PS: I’m using self-hosted n8n version 1.1.1)

Here are the settings I made in n8n, but it returns a 403 Forbidden error:

Below are the scripts I used to get the content from the webpage:

Python script:

import requests
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36'}
response = requests.get("https://www.bursamarketplace.com/index.php?tpl=themkt_securities", headers=headers)

Curl script:

curl 'https://www.bursamarketplace.com/index.php?tpl=themkt_securities' \
  -H 'authority: www.bursamarketplace.com' \
  -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \
  -H 'accept-language: en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7' \
  -H 'cache-control: max-age=0' \
  -H 'cookie: _fbp=fb.1.1691572616961.1804299937; isntp=0; _hjSessionUser_1831611=eyJpZCI6IjMzMGI5NTYwLWYyZDctNTk1ZS1iNzBkLTJlOGVjOTNiNjQyZiIsImNyZWF0ZWQiOjE2OTQxODk4NjMzMzQsImV4aXN0aW5nIjp0cnVlfQ==; _ga_FYDZQRCTTC=GS1.1.1694189862.1.1.1694189889.33.0.0; _fbc=fb.1.1694424851162.IwAR33H57lKL-zjDkuCjghUrRwNETGSIywXlbIRWIIY3WKIw-UrDiWURRYyr0; _ga_FC21QWPZV0=GS1.1.1695183465.2.0.1695183469.0.0.0; PHPSESSID=v2l8f0ljlqt15k8vuvghkh1nhc; _cfuvid=JYVPDqyuyu1o.F5im_AYlGnaRTvaw1QXdODUAv_0B5E-1696135983109-0-604800000; _ga=GA1.2.1460426012.1689233709; _ga_QVB3G9YQBJ=GS1.2.1696135984.15.1.1696139234.44.0.0; __cf_bm=V.JqCSRpKosPlkB22odQReGqnlECrATRl3gJnz9tBgk-1696139517-0-AUcIR8OCx5TTbRopzMvIoxKTGqx9SPMo47ZbO6WIxgcjDtK9EAP1DwAsbHEhZd8S+9ltTSt5Bhwd1nuXKPOAAvs=' \
  -H 'sec-ch-ua: "Google Chrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"' \
  -H 'sec-ch-ua-mobile: ?0' \
  -H 'sec-ch-ua-platform: "Windows"' \
  -H 'sec-fetch-dest: document' \
  -H 'sec-fetch-mode: navigate' \
  -H 'sec-fetch-site: none' \
  -H 'sec-fetch-user: ?1' \
  -H 'upgrade-insecure-requests: 1' \
  -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36'


Unfortunately, I don’t quite understand what you are trying to do. Do you want to download/scrape the content of the link with n8n? If so, the HTTP node probably won’t work on its own.

The issue was that certain APIs are case-sensitive and only work correctly if the auth header is submitted with the correct case.

Hi @seljo-ch, thanks for the comment. It is solved here: Reddit - Dive into anything

Mainly, it’s because when I scrape the data, the headers sent by n8n differ from the headers sent by Python.
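As a side note on why clients disagree: each HTTP client may rewrite header-name case before sending. For example, Python’s standard-library `urllib` normalizes names with `str.capitalize()`, so a `User-Agent` you set is actually stored (and sent) as `User-agent`. This sketch only inspects the prepared request; it makes no network call:

```python
import urllib.request

# urllib normalizes header names with str.capitalize(), so the name we
# set as "User-Agent" is stored internally as "User-agent".
req = urllib.request.Request(
    "https://www.bursamarketplace.com/index.php?tpl=themkt_securities",
    headers={"User-Agent": "Mozilla/5.0"},
)
print(req.header_items())  # [('User-agent', 'Mozilla/5.0')]
```

The `requests` library, by contrast, sends header names exactly as you typed them, which is one reason the same scrape can succeed in one client and 403 in another.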


I remember answering that one on Reddit :slight_smile:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.