karpathy / llm.c
LLM training in simple, raw C/CUDA
NCCL Tests
CUDA Library Samples
Instant neural graphics primitives: lightning fast NeRF and more
cuGraph - RAPIDS Graph Analytics Library
RAFT contains fundamental widely-used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and form building blocks for more easily writing high performance applications.
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
How to optimize common algorithms in CUDA.
Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores via the WMMA API and MMA PTX instructions.
Flash Attention in ~100 lines of CUDA (forward pass only)
🎉 CUDA notes / hand-written CUDA kernels for LLMs / C++ notes, updated sporadically: flash_attn, sgemm, sgemv, warp reduce, block reduce, dot product, elementwise, softmax, layernorm, rmsnorm, hist, etc.
CUDA-accelerated rasterization of Gaussian splatting
A CUDA tutorial for learning CUDA programming from scratch.
Causal depthwise conv1d in CUDA, with a PyTorch interface
CUDA Kernel Benchmarking Library