User Guide
Concepts and usage instructions for AI applications, inference, templates, model libraries, and images.
Reading Guide
- For deployment, see Quick Start.
- For GPU deployment, see NVIDIA/CUDA.
- Read in this order: AI Applications (including Application Templates) → AI Inference (including Inference Templates / AI Model Library) → AI Images.
Console Menu and Feature Overview
After entering the Artificial Intelligence section, the console menu is divided into Applications, Inference, and Images:
- Applications: Application instances (Dify/OpenClaw/ComfyUI), application templates.
- Inference: Inference instances (Ollama), inference templates, inference model library.
- Images: Container images, managed under "Artificial Intelligence" → "Images" and selected when creating templates.
Terminology
- Template: The resource configuration (CPU, memory, GPU, etc.) required to run an instance.
- Model Library: Reusable model resources available to inference applications.
- Image: The container image that provides an instance's runtime environment and determines its functionality and version.
Typical Relationship
Instance = Image + Spec + (optional) Model.
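The relationship above can be sketched as a simple data model. This is purely illustrative: the class and field names below are assumptions for the sake of the example, not part of the product's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Spec:
    """Resource configuration taken from a template (CPU / memory / GPU)."""
    cpu_cores: int
    memory_gb: int
    gpu_count: int = 0

@dataclass
class Image:
    """Container image that determines functionality and version."""
    name: str
    tag: str

@dataclass
class Model:
    """Reusable model resource from the model library."""
    name: str

@dataclass
class Instance:
    """Instance = Image + Spec + (optional) Model."""
    image: Image
    spec: Spec
    model: Optional[Model] = None  # application instances may omit a model

# Example: a hypothetical Ollama inference instance with an attached model.
instance = Instance(
    image=Image(name="ollama", tag="latest"),
    spec=Spec(cpu_cores=8, memory_gb=32, gpu_count=1),
    model=Model(name="llama3"),
)
```

The optional `model` field reflects the distinction in the guide: inference instances typically attach a model from the model library, while application instances may run on an image and spec alone.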
🗃️ AI Applications
🗃️ AI Inference
📄️ AI Images
Container images for running AI applications. They are managed in the console under Artificial Intelligence → Images and selected when creating templates. The image's LLM type must match the template's.