May 7, 2025 | 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed | A 1-bit SGD-style compressed optimizer that preserves Adam's convergence via an Adam pre-training (warmup) stage | Note, Low-Precision, Quantization, Error Compensation, Optimizer, Theoretical | ICML 2021
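The core trick in the compression stage is error feedback: the residual lost to 1-bit quantization is carried over and re-injected on the next step, so the bias cancels over time. A minimal NumPy sketch of that idea (function name `one_bit_step` and the plain-SGD step are my own simplifications; the real method compresses momentum and runs distributed all-reduce):

```python
import numpy as np

def one_bit_step(grad, error, lr=0.1):
    """One step of 1-bit compression with error feedback (simplified sketch).

    The gradient plus the accumulated compression error is quantized to its
    sign, scaled to preserve the mean magnitude; the residual is carried
    over so quantization errors cancel out across steps.
    """
    corrected = grad + error                  # error-compensated gradient
    scale = np.mean(np.abs(corrected))        # one fp scale per tensor
    compressed = scale * np.sign(corrected)   # 1-bit payload: signs + scale
    new_error = corrected - compressed        # residual fed back next step
    return -lr * compressed, new_error

# Usage: the error buffer persists across steps.
rng = np.random.default_rng(0)
g = rng.normal(size=4)
error = np.zeros(4)
update, error = one_bit_step(g, error)
```

Note the identity `corrected = compressed + new_error`: nothing is ever discarded, only delayed, which is what makes the convergence analysis go through.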
May 7, 2025 | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection | Gradient projection and weight updates in a low-rank subspace | Note, Lightweight, Low-Precision, Optimizer, SVD, Theoretical | ICML 2024
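The projection idea can be sketched in a few lines of NumPy: take the top-r left singular vectors of the gradient as a projector, step in the r-dimensional subspace, and project back. This is a hypothetical simplification (`galore_update` is my own name; real GaLore keeps Adam state on the projected gradient and refreshes the projector only every T steps):

```python
import numpy as np

def galore_update(W, G, r=2, lr=0.01):
    """One simplified GaLore-style step: project the full gradient G into a
    rank-r subspace, take the optimizer step there, project back.

    A plain SGD step stands in for Adam here for clarity; the memory win is
    that optimizer state only needs to live on the r x n projection G_low.
    """
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    P = U[:, :r]            # top-r left singular vectors: the projector
    G_low = P.T @ G         # r x n low-rank gradient (optimizer state here)
    W -= lr * (P @ G_low)   # project back and update the full weights
    return W, P

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 6))
G = rng.normal(size=(8, 6))
W, P = galore_update(W, G)
```

For an m x n weight with r << min(m, n), Adam's two moment buffers shrink from m x n to r x n each, which is where the memory savings come from.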
May 7, 2025 | MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence | A lightweight optimizer built on gradient sparsification and error compensation | Note, Lightweight, Error Compensation, Quantization, Optimizer, Theoretical | NeurIPS 2024
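The sparsification half of that summary is top-k selection with an error buffer: keep only the k largest-magnitude entries, feed the dropped mass back next step. A minimal sketch (`topk_step` is my own simplification; the real MicroAdam also quantizes the error buffer and maintains Adam statistics from the sparse gradient history):

```python
import numpy as np

def topk_step(grad, error, k=2, lr=0.1):
    """One step of top-k gradient sparsification with error feedback
    (simplified sketch of MicroAdam's compression idea)."""
    corrected = grad + error
    idx = np.argsort(np.abs(corrected))[-k:]   # k largest-magnitude entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]
    new_error = corrected - sparse             # dropped mass, fed back later
    return -lr * sparse, new_error

# Usage: only 2 of 5 coordinates are transmitted/updated this step.
g = np.array([0.1, -3.0, 0.2, 2.0, -0.05])
error = np.zeros(5)
update, error = topk_step(g, error)
```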
May 7, 2025 | Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients | Q-GaLore, which applies further low-precision quantization on top of GaLore | Note, Lightweight, Low-Precision, Optimizer, SVD, Empirical | arXiv 2024
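The "INT4 projection" in the title means storing GaLore's projection matrices in 4-bit integers. A minimal sketch of symmetric INT4 quantization (function names are my own; the real method is layer-adaptive and also keeps weights in low precision):

```python
import numpy as np

def quantize_int4(x):
    """Uniform symmetric INT4 quantization with one fp scale per tensor
    (simplified sketch of the compression Q-GaLore applies to projectors)."""
    scale = np.max(np.abs(x)) / 7.0            # map max magnitude to 7
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)  # INT4 range
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from INT4 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
P = rng.normal(size=(8, 2))                    # a stand-in projector
q, scale = quantize_int4(P)
P_hat = dequantize(q, scale)
```

An 8x2 fp32 projector takes 64 bytes; its INT4 codes fit in 8 bytes plus one scale, and the round-trip error is bounded by half a quantization step.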