ollama-setup
jeremylongshore/claude-code-plugins-plus-skills
This skill automates the setup and configuration of Ollama so you can run Large Language Models (LLMs) locally. It assesses the system (OS, GPU, RAM), selects an appropriate model for the available hardware, and handles installation on macOS, Linux, and Docker. Use it when you need a free, self-hosted AI alternative, require offline inference, or want to avoid paid hosted API services.
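The hardware assessment and model selection step can be sketched roughly as follows. This is an illustrative sketch, not the skill's actual logic: the RAM thresholds and model tags (`llama3:70b`, `llama3:8b`, `phi3:mini`) are assumptions chosen for the example, though they are real tags in the Ollama library.

```shell
# Detect the platform (Darwin = macOS, Linux, ...).
os="$(uname -s)"

# Total RAM in GiB, computed per platform.
case "$os" in
  Darwin) ram_gb=$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 )) ;;
  Linux)  ram_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 )) ;;
  *)      ram_gb=0 ;;
esac

# Pick a model that plausibly fits in memory (thresholds are illustrative).
if   [ "$ram_gb" -ge 32 ]; then model="llama3:70b"
elif [ "$ram_gb" -ge 16 ]; then model="llama3:8b"
else                            model="phi3:mini"
fi

echo "Detected $os with ${ram_gb} GiB RAM; would run: ollama pull $model"
```

After assessment, installation typically uses the platform's standard path, e.g. the official install script (`curl -fsSL https://ollama.com/install.sh | sh`) on Linux or the Ollama app/Homebrew on macOS, followed by `ollama pull` for the selected model.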