Everything you need.
Nothing you don't.
Every wheel is compiled, and every notebook tested, specifically for the Google Colab A100 runtime: Python 3.12, CUDA 12.6, PyTorch 2.9+. One pip install gets you all of them. No compiling, no conflicts, no wasted GPU time.
Optimized CUDA wheels, ready to pip install.

Every package below normally requires 30+ minutes of compilation from source. We build them against the exact Colab runtime so you don't have to.

flash-attn 2.7.3

Optimized Flash Attention 2 — the backbone of efficient transformer inference on A100s.

🔺 nvdiffrast 0.4.0

NVIDIA's differentiable rasterizer for 3D deep learning, prebuilt with CUDA support.

🧊 cumesh & o-voxel

CUDA-accelerated mesh processing and voxel utilities for 3D pipeline work.

🧮 flex-gemm 1.0.0

Flexible GEMM kernels optimized for mixed-precision workloads on Ampere GPUs.

🎨 nvdiffrec-render

Neural rendering components for reconstruction pipelines, ready to import.

📐 utils3d 0.0.2

3D math and geometry utilities with CUDA acceleration. No build step required.

🔧 xformers

Memory-efficient attention and transformer building blocks from Meta. Critical for running large models on limited VRAM.
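
Once installation finishes, a minimal sanity check like the sketch below runs in any Colab cell. It covers only the three packages above with well-known import names (flash_attn, nvdiffrast, xformers); the smaller utilities are left out because their import paths may differ from their wheel names.

# Quick import check: if these succeed without invoking a compiler,
# the prebuilt wheels are doing their job.
import flash_attn
import nvdiffrast.torch as dr
import xformers
import xformers.ops  # pulls in the memory-efficient attention kernels

print("flash-attn:", flash_attn.__version__)
print("xformers:  ", xformers.__version__)

# nvdiffrast exposes its CUDA rasterizer through a context object;
# constructing one requires a CUDA-capable GPU, which the A100 runtime provides.
glctx = dr.RasterizeCudaContext()
print("nvdiffrast CUDA context ready:", glctx is not None)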

Supported environment

Spec        Details
GPU         NVIDIA A100
Platform    Google Colab (linux x86_64)
Python      3.12
CUDA        12.6
PyTorch     2.9+
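
A minimal sketch, using only the Python standard library and PyTorch, for confirming that your Colab session matches this table before installing anything:

import sys
import torch

# Targets from the table above: Python 3.12, CUDA 12.6, PyTorch 2.9+, NVIDIA A100.
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)
print("GPU    :", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")

assert sys.version_info[:2] == (3, 12), "wheels target Python 3.12"
assert (torch.version.cuda or "").startswith("12.6"), "wheels target CUDA 12.6"
assert torch.cuda.is_available() and "A100" in torch.cuda.get_device_name(0), "wheels target the A100 runtime"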
Get generating instantly. Zero config.

Pre-configured Colab notebooks that use MissingLink wheels out of the box. Paste your token, hit run, and start producing output — no setup, no debugging, no dependency chasing.

More notebooks on the way — request one

Start free trial