Developing an AI for lower-end platforms for use with Ollama

Right now, I’m training an AI on my Raspberry Pi 5. Now, you might be thinking: how is this possible? Well, I’m not using the typical transformer training pipeline. Instead, I’m using a mix of two methods I invented: QLLK, which folds the compute footprint down into processable bits for lower-end devices, and TT (Turbo Train), which optimizes models to train on lower-end hardware. Right now I’m training Cynix, an AI, and I’m making it locally hostable, just like Ollama models, meaning you’ll be able to use it in your n8n workflows, both cloud and self-hosted, free of charge! (Or you can just use Ollama.)
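To give an idea of what "locally hosted, just like Ollama models" would look like in practice, here's a minimal sketch of building a request for Ollama's `/api/generate` endpoint. This assumes Cynix ends up exposing an Ollama-compatible API; the model name `cynix` is a placeholder, not a published model.

```python
import json

def build_generate_request(model: str, prompt: str) -> dict:
    # Request body for Ollama's /api/generate endpoint
    # (POST http://localhost:11434/api/generate).
    # stream=False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("cynix", "Hello!")
print(json.dumps(payload))

# To actually send it (requires a running Ollama-compatible server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

An n8n workflow would hit the same endpoint via an HTTP Request node, which is why any Ollama-compatible server slots in for free.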

The QLLK method is already on GitHub if you’d like to test it out; TT isn’t yet.


Nice. Does it support tool calling? One problem I noticed with Ollama is that it doesn’t call the MCP client node even though the model has been trained to use tools, e.g. gpt-oss 20b. However, if I use the same model through OpenRouter, it calls MCP tools just fine. After doing some research, it appears to point to an inherent issue with Ollama.
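For anyone wanting to reproduce this, here's a sketch of the request shape Ollama's `/api/chat` endpoint accepts for tool calling (OpenAI-style function definitions under `tools`). A model that supports tools should come back with `message.tool_calls` in the response; `get_weather` here is a hypothetical tool, not part of MCP or Ollama itself.

```python
import json

def build_chat_request(model: str, user_msg: str) -> dict:
    # Request body for Ollama's /api/chat endpoint
    # (POST http://localhost:11434/api/chat) with one tool declared.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "stream": False,
    }

payload = build_chat_request("gpt-oss:20b", "What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

If a model invokes tools through OpenRouter but not through this endpoint with the same definitions, that points at the serving layer rather than the model weights.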

I’m gonna try to do it!