Right now, I’m training an AI on my Raspberry Pi 5. You might be thinking: how is that possible? Well, I’m not using the typical approach. Usually, a model like this is trained as a standard transformer on big hardware, but I’m using a mix of two methods I invented: QLLK, which folds the compute footprint down into bits that smaller end devices can process, and TT (Turbo Train), which optimizes models to train on those smaller devices. The model I’m training is called Cynix, and I’m making it locally hostable, just like Ollama models, meaning you’ll be able to use it in your n8n workflows, cloud or self-hosted, free of charge! (Or you can just use Ollama.)
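For context on the “locally hosted, just like Ollama” part: Ollama serves models over a local HTTP API (port 11434 by default), and that API is what tools like n8n talk to. Here’s a minimal sketch of what such a request looks like — the model name `cynix` is hypothetical; substitute any model you’ve pulled:

```python
import json

# Ollama's local server listens on http://localhost:11434 by default.
# A one-shot text-generation request is a POST to /api/generate with a JSON body.
payload = {
    "model": "cynix",    # hypothetical model name; e.g. "llama3" for a real Ollama model
    "prompt": "Hello!",
    "stream": False,     # ask for one complete JSON response instead of a stream
}
body = json.dumps(payload)

# With an Ollama server running, you'd send it like this:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/generate",
#       data=body.encode(), headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
print(body)
```

Since it’s just HTTP, anything that can make a web request (an n8n HTTP Request node, curl, a script) can use a model hosted this way.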
The QLLK method is already on GitHub if you’d like to test it out; TT isn’t yet.