I'm trying to build a self-learning AI sales manager

Hey everyone,

I’m looking for some help with a project, and I’m hoping someone out there might have tackled something similar before. I’m trying to build a self-learning sales manager.

Here’s what I’ve got so far: an AI chatbot workflow that sells loan brokerage services.

My main questions are:

  1. How can I use analytics to capture and store information about successful sales steps?
  2. What’s the best way to implement AI-driven hypothesis testing within this setup?
  3. What kind of analytics data do you think is essential to collect and store for this purpose?
  4. How can I get the AI to select from a database of successful questions/answers while still continuing to test new approaches?

I think this is a pretty interesting challenge, and I’d be really grateful for any pointers you can offer! Thanks in advance!

Hello @maslennikov.ig,
for next time, please try asking in Help me Build my Workflow — that's the perfect space to get some inspiration.

By the way, here are some hints from my side:

  1. after each AI step or key interaction, store metrics like response content, response time, user replies, and conversion status in a database (e.g. PostgreSQL, Google Sheets, Airtable). This gives you structured analytics for every “sale” event. The template AI Data Analyst Chatbot shows how to pull and store data for later review.

  2. generate variations using LLMs: e.g. “Try messaging approach A vs B for this job.” Log outcomes, then retrain or let AI select the best path. You can mimic “A/B testing” by storing results and feeding them into an LLM to recommend the top performer.

  3. track timestamps, message type, AI prompt used, time to response, user sentiment, and conversation length. This data feeds decision-making models and is foundational for predictive analytics.
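As a sketch of points 1 and 3, a Code node could assemble a structured metrics record after each AI reply before a database node writes it out. The field names below are illustrative assumptions, not a fixed n8n schema:

```javascript
// Illustrative per-interaction metrics record (field names are assumptions,
// not an n8n API). In a real n8n Code node you would read these values from
// previous nodes via $json and insert the record with a database node.
function buildMetricsRecord(interaction) {
  return {
    timestamp: new Date(interaction.timestamp).toISOString(),
    messageType: interaction.messageType,        // e.g. "qualification", "offer"
    promptUsed: interaction.promptUsed,          // which AI prompt variant ran
    responseContent: interaction.responseContent,
    responseTimeMs: interaction.responseTimeMs,  // time to AI response
    userReply: interaction.userReply,
    converted: Boolean(interaction.converted),   // did this step lead to a sale?
  };
}

// Example with dummy data:
const record = buildMetricsRecord({
  timestamp: 1700000000000,
  messageType: "offer",
  promptUsed: "variant-A",
  responseContent: "Here is a loan option...",
  responseTimeMs: 820,
  userReply: "Sounds good, tell me more",
  converted: true,
});
```

One row per AI step like this is enough to reconstruct both per-message and per-conversation analytics later.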

You can also build a loop: fetch the past best, test one new idea, record the results, and update your “library”. All of this can be done within n8n using database nodes + AI + conditional branching.
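The fetch-best / test-new loop above amounts to an explore/exploit policy. A minimal epsilon-greedy sketch (all names hypothetical, not from an n8n node):

```javascript
// Epsilon-greedy selection between the "best library" (exploit) and new
// candidate approaches (explore). All names are hypothetical.
function pickApproach(bestLibrary, candidates, epsilon = 0.2) {
  // With probability epsilon, try a new candidate approach.
  if (candidates.length > 0 && Math.random() < epsilon) {
    return candidates[Math.floor(Math.random() * candidates.length)];
  }
  // Otherwise exploit the historically best-converting approach.
  return bestLibrary.reduce((best, a) =>
    a.conversions / a.trials > best.conversions / best.trials ? a : best
  );
}

const library = [
  { id: "A", conversions: 30, trials: 100 },
  { id: "B", conversions: 12, trials: 100 },
];
const chosen = pickApproach(library, [], 0); // epsilon 0 → pure exploit
```

In n8n this decision could live in a Code node, with an IF node branching to the chosen message path.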


Thanks a ton for the clear explanation! That definitely helped me get a much better handle on things.

However, one question still lingers for me. How would you go about getting the AI to propose new hypotheses itself based on the existing analytical data? Could you elaborate on that a bit more?

Thanks again!

In practice, you can set up a Schedule Trigger node so that every night the system retrieves the highest-performing and lowest-performing interactions from the database (for example PostgreSQL, Google Sheets, or Airtable). Then send these metrics to the LLM with a request such as “Here’s what worked and what didn’t: propose 2-3 new approaches to experiment with”.
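A sketch of that nightly step: given per-approach stats pulled from the database, split them into top and bottom performers and build the prompt text for the LLM node. Thresholds and field names here are illustrative assumptions:

```javascript
// Build the nightly hypothesis-generation prompt from per-approach stats
// (field names and the topN cutoff are illustrative assumptions).
function buildHypothesisPrompt(stats, topN = 3) {
  const sorted = [...stats].sort(
    (a, b) => b.conversions / b.trials - a.conversions / a.trials
  );
  const top = sorted.slice(0, topN);
  const bottom = sorted.slice(-topN);
  const fmt = (s) =>
    `- "${s.approach}" (${((100 * s.conversions) / s.trials).toFixed(1)}% conversion)`;
  return [
    "Here's what worked and what didn't.",
    "Worked:",
    ...top.map(fmt),
    "Didn't work:",
    ...bottom.map(fmt),
    "Propose 2-3 new messaging approaches to experiment with.",
  ].join("\n");
}

const prompt = buildHypothesisPrompt(
  [
    { approach: "direct offer", conversions: 40, trials: 100 },
    { approach: "rapport first", conversions: 10, trials: 100 },
  ],
  1
);
```

The resulting string would be passed as (part of) the prompt to your LLM node.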

This idea is inspired by the concepts of “hypothesis generation” used in data analysis and experimentation contexts, where an AI model extracts patterns from datasets and suggests new hypotheses.

Then you save these hypotheses as “candidates” and put them in rotation in your bot, monitoring their metrics: if they work, they become part of the “best library”; if not, they are discarded. You can then restart the nightly loop, feeding the LLM the weak results of the previous hypotheses so that it generates even more refined proposals. Some scientific frameworks work just like this, iterating between model generation and verification.
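The promote-or-discard step could look like this sketch, where the trial and conversion thresholds are illustrative assumptions you would tune:

```javascript
// Review candidate hypotheses: promote to the "best library" or discard
// (minTrials and minRate thresholds are illustrative assumptions).
function reviewCandidates(candidates, { minTrials = 50, minRate = 0.15 } = {}) {
  const promoted = [];
  const discarded = [];
  for (const c of candidates) {
    if (c.trials < minTrials) continue; // not enough data yet; keep rotating
    (c.conversions / c.trials >= minRate ? promoted : discarded).push(c);
  }
  return { promoted, discarded };
}

const { promoted, discarded } = reviewCandidates([
  { id: "H1", conversions: 20, trials: 60 }, // ~33% with enough trials
  { id: "H2", conversions: 3, trials: 80 },  // ~4%, below threshold
  { id: "H3", conversions: 1, trials: 10 },  // too few trials, keep testing
]);
```

Candidates with too little data stay in rotation, which keeps the loop from discarding ideas prematurely.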

Here is an article that talks about this.

Also consider using Evaluations in n8n for an initial review of the hypotheses. That way, in the pre-production phase, you can validate the best hypotheses against what you consider optimal and correct what the model is returning incorrectly.

If you think this answer is complete and meets your needs, mark it as the solution.
It's a big help to other users and encourages supporters.
