Tutorials

Practical guides for running AI locally. Tested on consumer hardware.

Coming soon

Running Ollama on Windows - no cloud needed

Install, configure, and run large language models on your own machine. Step by step, nothing skipped.

Ollama · Windows · LLM
Coming soon

Getting started with Open WebUI

A ChatGPT-style interface for your local models. How to set it up and actually use it day-to-day.

Open WebUI · Docker · Local AI
Coming soon

Local image generation with Stable Diffusion

Generate images on your GPU without paying per prompt. Setup, models, and what hardware you actually need.

Stable Diffusion · ComfyUI · GPU
Coming soon

Setting up a local coding assistant

No API key required. Run a code-completion model locally and wire it into VS Code.

VS Code · Ollama · Continue