My n8n mobile app streams Groq responses faster than the native ChatGPT app

I connected Groq to an n8n workflow and streamed the response to my phone. It was faster than the native ChatGPT app.

That surprised me. This is just a webhook, a simple workflow, and a lightweight mobile app — and it outperforms a dedicated AI product on response speed.

The app is called ShellX.

Ask “What’s on my list today?” and get your calendar and tasks back in seconds — streamed, directly on your phone. That’s all this is: a thin UI layer that talks to your n8n webhooks. Nothing in between, no data stored.

Under the hood:

The speed comes from SSE (server-sent events) streaming: responses appear the moment the model starts generating, instead of after the full completion. The app sends a sessionId so the workflow can keep conversation memory, and it supports labels for routing, so one webhook can power multiple workflows.
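To give a feel for what that looks like on the wire, here's a minimal client-side sketch in TypeScript (Node 18+). The webhook URL and the payload field names (message, sessionId, label) are illustrative assumptions, not ShellX's actual contract:

```typescript
// Minimal sketch: POST a message to an n8n streaming webhook and print
// SSE chunks as they arrive. The URL and payload fields are assumptions.
async function streamFromWebhook(message: string, sessionId: string, label: string) {
  const res = await fetch("https://your-n8n-instance/webhook/shellx", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, sessionId, label }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // SSE frames arrive as "data: ..." lines; print the payload of each.
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      if (line.startsWith("data: ")) process.stdout.write(line.slice(6));
    }
  }
}

streamFromWebhook("What's on my list today?", "demo-session-1", "Agenda");
```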

The workflow in the screenshot is all you need to get started: a streaming webhook, an AI Agent with Groq, memory, and a couple of tool nodes.


What else you can do with it:

Talk to your workflows
Speak a question and get a streamed AI response in real time, word by word, as the model generates it.

Route to different workflows
Label a message “Todo” and it routes to your task workflow. Label it “Agenda” and it hits your calendar. One webhook, multiple workflows — handled by a simple Switch node.
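Under the hood that's just a branch on one field. The Switch node does it with no code, but written out for illustration (the label values and field name mirror the examples above, they're assumptions) it's roughly:

```typescript
// Rough equivalent of the Switch node's branching logic.
function routeByLabel(label: string | undefined): "tasks" | "calendar" | "default" {
  switch (label?.toLowerCase()) {
    case "todo":
      return "tasks";    // → task workflow branch
    case "agenda":
      return "calendar"; // → calendar workflow branch
    default:
      return "default";  // → fallback branch
  }
}
```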

Get notified by your workflows
A workflow fails at 3AM? Get a push notification. An order comes in? Same.

Scan, snap, generate
Scan QR codes, send images to your workflows, or let your agent generate images and voice responses.

How are you handling mobile access to your workflows? Telegram bots? Dashboards? Something else? Would love to hear what’s working for you.