MAX AI & GPU kernels
What it is: Modular’s MAX AI documentation for LayoutTensor, linalg, nn, and GPU kernel authoring.
Why read it:
- Shows how Mojo maps to CUDA-style grids, warps, and shared memory without forcing you to write CUDA (a minimal kernel sketch follows this list).
- Documents MAX kernel APIs so you can extend transformer blocks, convolutions, and quantization flows.
- Provides guidance on LayoutTensor transformations, which are mandatory for serious GPU work.
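To make the CUDA-style mapping concrete, here is a minimal Mojo sketch of the pattern the docs describe: a 1D grid of blocks, with each thread computing one element. It assumes the `gpu` module's `thread_idx`/`block_idx`/`block_dim` indices and the `gpu.host.DeviceContext` buffer and launch calls (`enqueue_create_buffer`, `enqueue_function`); verify the exact signatures against the current MAX release, since they have shifted between versions.

```mojo
from gpu import thread_idx, block_idx, block_dim
from gpu.host import DeviceContext
from memory import UnsafePointer

alias SIZE = 1024
alias BLOCK = 256

# CUDA-style kernel: each thread handles one element, guarded against overrun.
fn add_one(data: UnsafePointer[Float32], n: Int):
    var i = block_idx.x * block_dim.x + thread_idx.x
    if i < n:
        data[i] = data[i] + 1.0

def main():
    with DeviceContext() as ctx:
        # Allocate a device buffer and launch over a 1D grid of blocks.
        var buf = ctx.enqueue_create_buffer[DType.float32](SIZE)
        ctx.enqueue_function[add_one](
            buf.unsafe_ptr(),
            SIZE,
            grid_dim=(SIZE + BLOCK - 1) // BLOCK,
            block_dim=BLOCK,
        )
        ctx.synchronize()
```

Copying results back to the host is omitted here; the DeviceContext pages cover the host-mapping helpers for that.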
Tips before you click:
- Read the LayoutTensor overview first; everything else builds on it (a small tiling sketch follows these tips).
- Keep the linalg and nn pages handy when you need building blocks for AI workloads.
- Look for the kernel cookbooks; they make great starting templates.
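As a taste of what the LayoutTensor overview covers, here is a hedged sketch of a tiled kernel body: `tile[...]` carves a logical sub-view out of a tensor without copying, which is the kind of transformation the tips above point at. The `mut=True` parameterization and the `tile` method follow the `layout` package as documented; treat the exact parameter order as an assumption to check against the LayoutTensor reference.

```mojo
from gpu import thread_idx, block_idx
from layout import Layout, LayoutTensor

alias dtype = DType.float32
alias layout = Layout.row_major(8, 8)
alias TILE = 4

# Each thread block works on one TILE x TILE view of the full tensor.
fn scale_tile(tensor: LayoutTensor[mut=True, dtype, layout]):
    # tile[...] returns a view; indices inside the tile are local (0..TILE-1).
    var tile = tensor.tile[TILE, TILE](block_idx.y, block_idx.x)
    var row = thread_idx.y
    var col = thread_idx.x
    tile[row, col] = tile[row, col] * 2.0
```

Launched with a (2, 2) grid and (TILE, TILE) blocks via the same DeviceContext.enqueue_function pattern as the earlier sketch, this covers the whole 8x8 tensor one tile per block.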