GriswoldLabs

Tagged: ollama

1 post

January 29, 2026
ollama ai unraid docker self-hosted

Running Ollama on Unraid for Local AI Inference

Set up local LLM inference on your Unraid server with Ollama. CPU-only setup, model selection, API usage, and integration with development tools.
