Capability
Custom Autograd Functions For Quantized Backward Passes
2 artifacts provide this capability.
Top Matches
8-bit and 4-bit quantization enabling QLoRA fine-tuning.
Unique: Implements custom autograd functions that reconstruct intermediate values from quantization metadata during the backward pass, rather than keeping full-precision tensors alive, while maintaining numerical stability. Uses QuantState objects to track absmax scaling factors and bit-widths, enabling efficient gradient computation through quantized layers (a sketch of this pattern follows the comparison below).
vs others: Enables training through quantized layers without materializing full-precision intermediates, cutting the memory footprint by 50-75% relative to standard PyTorch autograd while remaining compatible with gradient checkpointing and distributed training.
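To make the pattern concrete, here is a minimal PyTorch sketch: a custom autograd function whose backward rebuilds the weight from quantized data plus an absmax scale, instead of letting autograd retain a full-precision copy. The row-wise int8 quantizer and the names `QuantizedLinearFn`, `quantize_rowwise`, and `dequantize_rowwise` are illustrative stand-ins, not the artifact's actual API; the real implementation tracks block-wise absmax and bit-width metadata in QuantState objects.

```python
import torch

def quantize_rowwise(w):
    """Toy per-row symmetric int8 quantizer; absmax is the metadata that is kept."""
    absmax = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    q = torch.round(w / absmax * 127).to(torch.int8)
    return q, absmax

def dequantize_rowwise(q, absmax, dtype=torch.float32):
    """Reconstruct an approximate full-precision weight from int8 data + absmax."""
    return (q.to(dtype) * absmax.to(dtype)) / 127

class QuantizedLinearFn(torch.autograd.Function):
    """Linear layer whose backward rebuilds the weight from quantization metadata."""

    @staticmethod
    def forward(ctx, x, weight_q, absmax, bias=None):
        # Dequantize just-in-time for the matmul; autograd only retains the
        # int8 weight and its absmax, never a standing full-precision copy.
        weight = dequantize_rowwise(weight_q, absmax, dtype=x.dtype)
        out = x @ weight.t()
        if bias is not None:
            out = out + bias
        ctx.save_for_backward(weight_q, absmax)
        ctx.has_bias = bias is not None
        return out

    @staticmethod
    def backward(ctx, grad_out):
        weight_q, absmax = ctx.saved_tensors
        # Rebuild the weight on demand from the stored quantization metadata.
        weight = dequantize_rowwise(weight_q, absmax, dtype=grad_out.dtype)
        grad_x = grad_out @ weight
        grad_bias = grad_out.sum(dim=0) if ctx.has_bias else None
        # The quantized base weight stays frozen (QLoRA-style fine-tuning),
        # so no gradients are returned for weight_q or absmax.
        return grad_x, None, None, grad_bias

# Usage: gradients flow to the activations through the quantized layer.
w = torch.randn(256, 128)
wq, amax = quantize_rowwise(w)
x = torch.randn(4, 128, requires_grad=True)
y = QuantizedLinearFn.apply(x, wq, amax)
y.sum().backward()
print(x.grad.shape)  # torch.Size([4, 128])
```

Because only the int8 tensor and its absmax are saved for backward, the memory cost per layer is roughly that of the quantized weight; in a QLoRA-style setup, trainable low-rank adapters would be attached alongside this frozen quantized path.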