How to run data "through" an LLM using tools, instead of exposing the LLM to the data

Hi,

I’m busy building a workflow that will, at its heart, have an LLM driven by the OpenAI API (or a similar provider). The data is rather sensitive (and large), so for this to be viable I cannot send it to the API for analysis (it would also eat through any usage limits on the API). Nor can I deploy a local LLM through Ollama, as the machines I have aren’t powerful enough.

We deal a lot with transaction data from clients. We want to run analysis on this data, but give each client the freedom to query it themselves rather than a fixed report that we design, which might not make sense to them or be impactful enough; there is also a sense of ownership when they design the queries.

The idea was to use an n8n workflow to give them a chat interface for querying their data.

We were thinking of developing Python-driven analysis functions and building up a library of them over time, e.g. store-performance tools. Then, when a customer asks about store performance, instead of the raw data going to OpenAI, a tool is called and the data is analysed locally; the result is anonymised, and the LLM then just translates that result into plain-English paragraphs.
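
Roughly what I have in mind, as a sketch using the OpenAI Python SDK's function-calling interface (store_performance, the file name and the column names are placeholders for our own functions, and gpt-4o-mini is just an example model):

```python
import json
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def store_performance(store_id: str) -> str:
    """Runs locally against the raw transaction data; only this
    aggregated, anonymised JSON string ever leaves our machine."""
    df = pd.read_parquet("transactions.parquet")   # raw data stays here
    sales = df[df["store_id"] == store_id]["amount"]
    return json.dumps({"total": float(sales.sum()),
                       "avg_basket": float(sales.mean()),
                       "n_transactions": int(len(sales))})

tools = [{
    "type": "function",
    "function": {
        "name": "store_performance",
        "description": "Summarise sales performance for one store.",
        "parameters": {
            "type": "object",
            "properties": {"store_id": {"type": "string"}},
            "required": ["store_id"],
        },
    },
}]

messages = [{"role": "user",
             "content": "How is store 12 doing this month?"}]
resp = client.chat.completions.create(model="gpt-4o-mini",
                                      messages=messages, tools=tools)

# Assuming the model decides to call the tool rather than answer directly:
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)   # e.g. {"store_id": "12"}
result = store_performance(**args)           # analysis happens locally

messages += [resp.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": result}]
final = client.chat.completions.create(model="gpt-4o-mini",
                                       messages=messages, tools=tools)
print(final.choices[0].message.content)      # plain-English summary
```

In n8n this would presumably map to an AI Agent node with custom tool nodes, but the data flow should be the same: the model only ever sees the tool schema, its own arguments, and the anonymised result string.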

Does this make sense, or am I imagining something that isn’t possible?
If the LLM understands the tools at its disposal and calls one, does the raw data then actually flow through the LLM at any point?
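
To make the question concrete: my understanding (please correct me if it’s wrong) is that the only payloads that ever reach the API look roughly like the below, so the raw transaction table itself never crosses the wire. Values here are made up:

```python
# What I believe OpenAI actually receives over the course of one query
# (hypothetical values; the raw transactions never appear anywhere):
messages = [
    {"role": "user", "content": "How is store 12 doing this month?"},
    # the model's reply: just the tool name and the arguments it chose
    {"role": "assistant", "tool_calls": [...]},
    # my locally computed, anonymised aggregate, passed back as a string
    {"role": "tool", "tool_call_id": "call_1",
     "content": '{"total": 52340.5, "avg_basket": 41.2, "n_tx": 1270}'},
]
```

Is that an accurate picture?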

(I know there is debate over whether OpenAI and the like retain your data or train on it, but irrespective of that, clients won’t agree.)

Much appreciated.