Every package below normally takes 30+ minutes to compile from source. We build each one against the exact Colab runtime so you don't have to.
Optimized Flash Attention 2 — the backbone of efficient transformer inference on A100s.
NVIDIA's differentiable rasterizer for 3D deep learning, prebuilt with CUDA support.
CUDA-accelerated mesh processing and voxel utilities for 3D pipeline work.
Flexible GEMM kernels optimized for mixed-precision workloads on Ampere GPUs.
Neural rendering components for reconstruction pipelines, ready to import.
3D math and geometry utilities with CUDA acceleration. No build step required.
Memory-efficient attention and transformer building blocks from Meta. Critical for running large models on limited VRAM.
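The memory savings behind kernels like Flash Attention and xFormers' memory-efficient attention come from computing attention in tiles with an online softmax, so the full sequence-by-sequence score matrix is never materialized. A minimal NumPy sketch of that idea (illustrative only; the real kernels run this fused on the GPU in mixed precision):

```python
import numpy as np

def tiled_attention(q, k, v, block=64):
    """Attention over key/value tiles with a running (online) softmax.
    The full (n_q, n_k) score matrix is never built; only one
    (n_q, block) tile of scores exists at a time."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    n_q = q.shape[0]
    out = np.zeros((n_q, v.shape[1]))
    running_max = np.full((n_q, 1), -np.inf)   # max score seen so far, per query
    running_sum = np.zeros((n_q, 1))           # softmax denominator so far
    for start in range(0, k.shape[0], block):
        s = (q @ k[start:start + block].T) * scale
        new_max = np.maximum(running_max, s.max(axis=-1, keepdims=True))
        correction = np.exp(running_max - new_max)  # rescale old accumulators
        p = np.exp(s - new_max)
        out = out * correction + p @ v[start:start + block]
        running_sum = running_sum * correction + p.sum(axis=-1, keepdims=True)
        running_max = new_max
    return out / running_sum

# Example inputs (sizes are arbitrary, for illustration):
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((128, 16)) for _ in range(3))
tiled = tiled_attention(q, k, v, block=32)
```

The tiled result is numerically identical to naive softmax attention; the win is peak memory, which is what lets the CUDA versions of this trick fit long sequences in limited VRAM.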
| Spec | Details |
|---|---|
| GPU | NVIDIA A100 |
| Platform | Google Colab (Linux, x86_64) |
| Python | 3.12 |
| CUDA | 12.6 |
| PyTorch | 2.9+ |
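Because the wheels are built against one exact runtime, it's worth confirming your session matches the table above before installing. A best-effort sketch using the standard library plus an optional torch probe (the version constants simply mirror the table; treat them as assumptions about what a given wheel requires):

```python
import sys
import platform

def runtime_mismatches(min_torch=(2, 9)):
    """Return a list of ways this session differs from the build matrix
    above; an empty list means it matches. torch/CUDA checks are
    skipped with a note if torch isn't importable."""
    problems = []
    if sys.version_info[:2] != (3, 12):
        problems.append(f"Python {sys.version_info[0]}.{sys.version_info[1]} != 3.12")
    if platform.system() != "Linux" or platform.machine() != "x86_64":
        problems.append(f"platform {platform.system()}/{platform.machine()} != Linux/x86_64")
    try:
        import torch
        if tuple(int(x) for x in torch.__version__.split(".")[:2]) < min_torch:
            problems.append(f"torch {torch.__version__} < 2.9")
        if torch.version.cuda != "12.6":  # None on CPU-only builds
            problems.append(f"CUDA {torch.version.cuda} != 12.6")
    except ImportError:
        problems.append("torch not importable (torch/CUDA checks skipped)")
    return problems

print(runtime_mismatches())
```

Run this in a fresh cell before `pip install`; if it prints anything other than `[]`, the prebuilt wheels may not import cleanly in that session.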
Pre-configured Colab notebooks that use MissingLink wheels out of the box. Paste your token, hit run, and start producing output — no setup, no debugging, no dependency chasing.
Generate high-quality 3D models (.glb) from images using Microsoft's TRELLIS.2 4B model. Batch processing with automatic MP4 previews and Google Drive resume support.
AI-powered image editing with Qwen. Transform images using natural language prompts — describe the change you want and let the model handle the rest.
More notebooks on the way — request one