Template | Description |
---|---|
vLLM | vLLM is a fast and easy-to-use library for LLM inference and serving. |
Ubuntu Noble Numbat | Ubuntu 24.04 LTS “Noble Numbat” is the latest Long Term Support (LTS) release from Canonical, launched on April 25, 2024. |
Ubuntu Jammy Jellyfish | Ubuntu 22.04 LTS “Jammy Jellyfish” is a Long Term Support (LTS) release of the Ubuntu operating system, launched on April 21, 2022. |
Ubuntu Focal Fossa | Ubuntu 20.04 LTS “Focal Fossa” is a Long Term Support (LTS) release of the Ubuntu operating system, launched on April 23, 2020. |
NVIDIA Triton | NVIDIA Triton Inference Server is an open-source software platform designed to streamline and standardize the deployment of AI models in production environments. |
Tensorflow | TensorFlow is an open-source platform developed by Google for building and deploying machine learning and deep learning models. |
Red Hat ubi9 | Red Hat Universal Base Image 9 (UBI 9) is a freely redistributable, Open Container Initiative (OCI)-compliant base operating system image provided by Red Hat. |
Ray | Ray is an open source unified framework for scaling AI and Python applications. It provides a simple, universal API for building distributed applications that can scale from a laptop to a cluster. |
Pytorch | PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. |
Oobabooga’s Web UI | Oobabooga is a simple web UI for interacting with open-source models. |
Open WebUI | Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution. |
Apache MXNet | Apache MXNet is an open-source deep learning framework designed for both efficiency and flexibility, with a focus on scalability across multiple CPUs and GPUs. |
MLFlow | MLflow is an open-source platform for managing the end-to-end machine learning lifecycle, encompassing tracking experiments, packaging code, managing models, and deploying them, all in a reproducible and collaborative manner. |
Blender Kasm | Blender Kasm refers to the integration of Blender, a powerful open-source 3D creation suite, within Kasm Workspaces, a container streaming platform. This setup allows users to access Blender through a web browser, eliminating the need for local installations and enabling remote 3D modeling and animation work. |
Juice Labs Agent | Juice is GPU-over-IP: a software application that routes GPU workloads over standard networking, creating a client-server model where virtual remote GPU capacity is provided from Server machines that have physical GPUs (GPU Hosts) to Client machines that are running GPU-hungry applications (Application Hosts). This template allows users to add external GPUs to their existing pools. |
Hugging Face Transformers | Hugging Face Transformers is a widely-used open-source library that provides easy access to state-of-the-art natural language processing (NLP) and generative AI models, including models for text, vision, audio, and multimodal tasks. |
Fedora | Fedora is a community-driven Linux distribution sponsored by Red Hat. |
Cuda Devel Ubuntu | “cuda-devel” in Ubuntu refers to the package containing the development files for the NVIDIA CUDA Toolkit. This package provides the necessary headers and libraries for compiling CUDA applications. |
ComfyUI | ComfyUI is an open-source, node-based graphical user interface (GUI) designed for creating and managing complex workflows in generative AI models, particularly Stable Diffusion. |
Selecting the Instance
Finding a Template
Choosing a Template
Configuring the Template
- Environment Variables: Some templates include environment variables in the ${VARIABLE_NAME} format, for example ${TOKEN} and ${PW}. Fill out the Key and Value, and click ‘Add’ to associate the variables with your template.
- Configuration: You may edit the YAML in the template directly; examples might include changing the image from latest to another version (see the sketch below). Keep in mind that changing the YAML makes the template custom, and GPU Trader doesn’t guarantee it will work.
- Save Template: If you make changes to the template that you wish to save for future use, click ‘Save Template’ to save it as a custom template.
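As an illustration of the Environment Variables and Configuration steps, a container-style template edited this way might look like the sketch below. The service name, the vllm/vllm-openai image, the port, and the variable mappings are assumptions made for this example, not the exact GPU Trader template schema.

```yaml
# Illustrative sketch only -- the service name, image, port, and variable
# mappings are assumptions, not the exact GPU Trader template schema.
services:
  inference:
    # Pinning a specific tag instead of 'latest' makes the deployment
    # reproducible, but editing the image also marks the template as custom.
    image: vllm/vllm-openai:v0.6.3
    environment:
      # Substituted from the Key/Value pairs added in the template UI.
      HF_TOKEN: ${TOKEN}
      ADMIN_PASSWORD: ${PW}
    ports:
      - "8000:8000"
```

Whatever keys a template declares, the values entered in the Key and Value fields replace the corresponding placeholders when the instance is deployed.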
Confirming the Template Deployed