Capability
Matrix Multiplication with Quantized Operands (GEMM Operations)
4 artifacts provide this capability.
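To make the capability concrete, here is a minimal NumPy sketch of a GEMM over int8-quantized operands, assuming symmetric per-tensor quantization; the function names and scaling scheme are illustrative and not drawn from any listed artifact.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization to int8: x ~ q * scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def quantized_gemm(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiply quantized operands in int32 accumulators, then dequantize."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    # int8 x int8 products must accumulate in int32 to avoid overflow
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)

a = np.random.randn(4, 8).astype(np.float32)
b = np.random.randn(8, 3).astype(np.float32)
print(np.max(np.abs(quantized_gemm(a, b) - a @ b)))  # small quantization error
```

Production kernels refine this basic pattern with per-channel or per-block scales, but the core idea (low-precision operands, wide accumulators, dequantized output) is the same.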
Top Matches
via “efficient quantization support (8-bit and 4-bit) for memory-constrained deployment”
Google's open-weight model family, spanning 1B to 27B parameters.
Unique: officially validated quantization support across multiple frameworks (bitsandbytes, GPTQ, AWQ), with published quality benchmarks. This lets developers pick a quantization strategy to match their deployment constraints without custom optimization work (see the loading sketch after this list).
vs others: achieves better quality/speed tradeoffs at 4-bit than Llama 2, attributed to training-aware quantization considerations, and is simpler to deploy than custom quantization schemes or model distillation.
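As a hedged sketch of the 4-bit deployment path described above, the following shows how a causal LM could be loaded with bitsandbytes NF4 quantization through Hugging Face transformers. The model id is a placeholder, since the listing does not name the checkpoint, and the configuration values are common illustrative defaults, not settings confirmed by the source.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # pack weights into 4-bit blocks
    bnb_4bit_quant_type="nf4",              # NormalFloat4, suited to normally distributed weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for the GEMM itself
)

model_id = "org/model-name"  # placeholder; substitute the artifact's actual checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers across available devices
)
```

GPTQ and AWQ follow a similar pattern, typically loading pre-quantized checkpoints with their own configuration classes rather than quantizing at load time.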