Run large language models locally with Ollama. Pull, run, create, and manage AI models on your own hardware.
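As a sketch of the workflow described above, the core Ollama CLI verbs cover the pull/run/create/manage cycle. The model name `llama3.2` is only an example; substitute any model available in the Ollama library.

```shell
# Download a model from the Ollama library (model name is illustrative)
ollama pull llama3.2

# Start an interactive chat session with the model
ollama run llama3.2

# List models installed on this machine
ollama list

# Build a custom model from a Modelfile in the current directory
ollama create my-custom-model -f Modelfile

# Remove a model to free disk space
ollama rm llama3.2
```

These commands assume the Ollama server is running locally (it starts automatically with the desktop app, or via `ollama serve`).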
2 agents:

- Expert AI agent for managing local LLM models with Ollama — model selection, GPU/CPU configuration, quantization tradeoffs, and performance tuning for development workflows.
- AI agent specialized in creating custom Ollama Modelfiles — system prompts, parameter tuning, template configuration, and adapter integration for domain-specific local AI models.
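To illustrate what the Modelfile agent works with, here is a minimal sketch of a Modelfile using the directives the description mentions (system prompt, parameters, template). The base model and values are assumptions for illustration, not recommendations.

```
# Base model to build on (illustrative choice)
FROM llama3.2

# Sampling parameters: lower temperature for more deterministic output
PARAMETER temperature 0.3
PARAMETER num_ctx 4096

# System prompt defining the model's domain-specific role
SYSTEM "You are a concise assistant for answering Linux sysadmin questions."
```

Such a file is built into a runnable model with `ollama create <name> -f Modelfile`; `ADAPTER` and `TEMPLATE` directives can be added for LoRA adapters and custom prompt templates.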