Hey everyone, I’m looking for someone who can set up a Mac Mini M4, on-site or remotely, for 24/7 local AI operations. This is a paid one-time project, $500 fixed.
What needs to be set up:
macOS hardened (FileVault, firewall, dedicated user account)
Ollama running and tested with at least one model
n8n self-hosted, running as a persistent service that survives reboots
AnythingLLM connected to local Ollama
ChromaDB for vector memory
Telegram bot integration via n8n
One end-to-end workflow test confirming everything talks to each other
Deliverable is a working machine + written setup summary.
Location: Denver, CO. On-site preferred, but remote screen share works if you’re confident in Mac-based local AI setups.
DM me or reply here if you’ve done something similar. GitHub or homelab examples welcome.
I haven’t worked with macOS specifically for this kind of setup, but the stack itself (Ollama, ChromaDB, n8n as a service, AnythingLLM) is all stuff I’ve configured and debugged before on Linux. Happy to do it via remote screen share.
DM me if you want to chat - I’m in CET timezone, flexible on hours.
Email: [email protected]
Anton, saw the portfolio. Impressive n8n work. Since you mentioned your experience is primarily Linux/VPS, I have two specific ‘Mac-side’ questions (plus one on scheduling) to make sure this stays stable 24/7:
Persistence: macOS doesn’t use systemd. How do you plan to ensure n8n and ChromaDB survive a reboot and run as background services without an active user session logged in?
M4 Performance: Since this isn’t a VPS with shared vCPUs, how will you ensure Ollama leverages the M4’s Unified Memory/GPU efficiently without interfering with the macOS window server?
Connectivity: I’m in MST (8 hours behind you). Are you comfortable with a handoff/sync window between 1 PM and 5 PM MST?
If you’re confident you can translate your Linux stack to a hardened macOS environment, let’s talk.
I can remotely set up your Mac Mini M4 for a robust, 24/7 local AI operation. I have extensive experience self-hosting n8n and integrating it with local LLM stacks, ensuring everything is persistent and survives reboots.
How I will handle your setup:
Hardening & Persistence: I’ll configure macOS security (FileVault/Firewall) and set up n8n and Ollama as background services (using launchd or Docker) so they auto-start on boot.
The AI Stack: I’ll deploy Ollama, connect it to AnythingLLM, and spin up ChromaDB as your vector store. I’ll ensure the M4’s Unified Memory is properly utilized for optimal inference speeds.
Telegram Integration: I’ll build a “Heartbeat” workflow in n8n that connects your Telegram bot to the local Ollama instance, confirming the end-to-end data flow (a quick sanity check of that flow is sketched right after this list).
Documentation: You’ll receive a written setup summary with all local endpoints, service commands, and a “Quick Restart” guide.
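Before the Telegram side is wired in, the flow can be smoke-tested against Ollama’s REST API directly. A minimal sketch, assuming the default port, with the model name as a placeholder:

```sh
# Ask the local Ollama for one non-streamed completion; a JSON reply
# confirms the inference side of the chain is up before n8n is added.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "ping", "stream": false}'
```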
Why me: I’m a developer focused on production-grade automations. I don’t just “install” apps; I build systems that stay online. I’m comfortable working via remote screen share and can adjust to your Denver timezone for the session.
Anton, saw the portfolio. Impressive n8n work. Since your experience is primarily Linux/VPS, I have two specific ‘Mac-side’ questions (plus one on scheduling) to make sure this stays stable 24/7:
Persistence: macOS doesn’t use systemd. For a headless Mac Mini, how do you plan to ensure n8n and ChromaDB survive a reboot and run as background services without an active user session logged in?
Hardware Optimization: How will you ensure Ollama is correctly leveraging the M4’s Unified Memory/GPU (Metal) rather than just hitting the CPU?
Scheduling: I’m in MST (8 hours behind you). Note that I am unavailable on Mondays, Wednesdays, and Fridays between 5 AM and 1 PM local time. Does that sync window work for you?
If you’re confident you can translate your Linux stack to a hardened macOS environment, let’s talk.
Available for remote screen share. I work with n8n + Telegram bots + AI APIs daily in production.
On your specific concerns:
macOS persistence: launchd plist for Ollama, Docker Desktop with restart policy for n8n; no systemd needed (a plist sketch follows this list)
M4 optimization: Ollama uses Metal acceleration natively. With 16GB RAM, a 7-8B model runs comfortably alongside n8n and ChromaDB. I’ll tune OLLAMA_MAX_LOADED_MODELS and context window based on your actual RAM
Docker networking: Ollama runs natively on macOS, n8n in Docker — connection via host.docker.internal:11434. I’ve dealt with this exact setup before
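To make the launchd piece concrete, here’s a minimal sketch of such a plist; the label, binary path, and env values are assumptions I’d confirm on the machine. Installed under /Library/LaunchDaemons it runs as a system daemon, i.e. with no user session logged in:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- hypothetical /Library/LaunchDaemons/com.local.ollama.plist -->
  <key>Label</key><string>com.local.ollama</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/ollama</string>
    <string>serve</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <!-- keep a single model resident; tuned to the machine's RAM -->
    <key>OLLAMA_MAX_LOADED_MODELS</key><string>1</string>
  </dict>
  <key>RunAtLoad</key><true/>  <!-- start at boot -->
  <key>KeepAlive</key><true/>  <!-- relaunch on crash -->
</dict>
</plist>
```

Loaded once with `sudo launchctl bootstrap system /Library/LaunchDaemons/com.local.ollama.plist`, it starts at boot and restarts if the process dies.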
My plan for the session:
Ollama native install + model pull + Metal verification (a quick check is sketched after this list)
n8n + ChromaDB via docker-compose (persistent, auto-restart)
AnythingLLM → local Ollama + ChromaDB
Telegram bot workflow in n8n (message → Ollama → reply)
End-to-end test + written documentation
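For the Metal verification step, this is roughly what I’d run, assuming a recent Ollama build (the model name is a placeholder):

```sh
ollama pull llama3           # placeholder model; swap for whatever you want to run
ollama run llama3 "hello"    # load the model and get one reply
ollama ps                    # PROCESSOR column should report GPU, not CPU,
                             # confirming Metal offload on the M4
```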
I can prepare the full docker-compose.yml, launchd config, and n8n workflow JSON in advance — so the session is just execution, not figuring things out live.
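To give a sense of it, a trimmed sketch of that compose file; image tags, ports, and volume paths are assumptions I’d pin down against your machine:

```yaml
services:
  n8n:
    image: n8nio/n8n
    restart: unless-stopped        # restart on crash and whenever the engine comes up
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n   # persist credentials and workflows
    # workflows reach the native Ollama at http://host.docker.internal:11434
  chromadb:
    image: chromadb/chroma
    restart: unless-stopped
    ports:
      - "8000:8000"
    volumes:
      - chroma_data:/chroma/chroma # persist dir; confirm against the image version
volumes:
  n8n_data:
  chroma_data:
```

One caveat I’d flag up front: restart: unless-stopped only fires once the Docker engine itself is running, so Docker Desktop has to be set to start automatically on the Mac.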
Available evenings/weekends your time (MST). Ready to start this weekend.
We run exactly this stack in production — Ollama (qwen2.5-coder:14b + 8 other models) + n8n + ChromaDB + Telegram bot on our own infrastructure.
Our setup includes SSH tunnel for remote Ollama access from cloud services, so n8n workflows hit localhost:11434 seamlessly whether running locally or from remote nodes. We handle macOS hardening, persistent services (launchd), and end-to-end workflow testing as standard practice.
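The tunnel itself is a one-liner run from the remote node; the hostname and user below are placeholders:

```sh
# Forward the node's localhost:11434 to the Mac Mini's Ollama port,
# in the background (-f) with no remote command (-N).
ssh -f -N -L 11434:localhost:11434 svc@mac-mini.example.com
```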
No need to be in Denver — our team has done identical setups remotely via screen share. We can configure everything: FileVault, Ollama with your preferred models, n8n as a persistent service, AnythingLLM connected to local inference, ChromaDB for vector storage, and the Telegram bot workflow.
This is a clean, well-defined project — exactly how I like them.
I’ve set up this exact stack before: n8n self-hosted running as a launchd service on macOS (survives reboots, auto-restarts on crash), Ollama with model management, AnythingLLM pointed at local Ollama endpoints, ChromaDB for persistent vector memory, and a Telegram bot trigger via an n8n webhook.
For macOS hardening I typically handle FileVault, firewall rules, dedicated service user, and SSH key-only access.
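Concretely, that pass mostly uses macOS’s built-in tools; a sketch, with the service user name as a placeholder:

```sh
sudo fdesetup enable            # FileVault (interactive; prints a recovery key to store safely)
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on   # firewall on
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setstealthmode on   # drop probe responses
sudo sysadminctl -addUser aiops -fullName "AI Services"   # dedicated non-admin service user
# Key-only SSH: in /etc/ssh/sshd_config set
#   PasswordAuthentication no
#   KbdInteractiveAuthentication no
```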
$500 fixed is fair for the scope. I’m based in the US (CST), happy to do the remote setup via screen share, or async if you prefer.