ColTraIn HBFP Training Emulator
Co-located deep neural network training and inference
HBFP is a hybrid Block Floating-Point (BFP)/Floating-Point (FP) number representation for DNN training, introduced by ColTraIn, the Co-located DNN Training and Inference team of PARSA and MLO at EPFL. HBFP offers the best of both worlds: the high accuracy of floating-point and the superior hardware density of fixed-point, achieved by performing all dot products in BFP and all other operations in FP32. For a wide variety of models, HBFP matches floating-point accuracy while enabling hardware implementations that deliver up to 8.5x higher throughput. This repository hosts ongoing research on training DNNs with HBFP.
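To illustrate the idea behind HBFP's split, here is a minimal sketch of BFP quantization and a BFP dot product with FP32 accumulation. This is not the repository's actual implementation; the function names (`bfp_quantize`, `bfp_dot`) and parameters (`mantissa_bits`, `block_size`) are hypothetical choices for illustration only.

```python
import numpy as np

def bfp_quantize(x, mantissa_bits=8, block_size=64):
    """Quantize a 1-D float32 array to block floating-point (BFP):
    every block of `block_size` values shares one power-of-two
    exponent, and each mantissa is rounded to `mantissa_bits` bits."""
    x = np.asarray(x, dtype=np.float32)
    out = np.empty_like(x)
    qmax = 2 ** (mantissa_bits - 1) - 1  # largest signed mantissa value
    for start in range(0, x.size, block_size):
        block = x[start:start + block_size]
        max_abs = float(np.max(np.abs(block)))
        if max_abs == 0.0:
            out[start:start + block_size] = 0.0
            continue
        # Shared exponent chosen so the block maximum fits the mantissa range.
        shared_exp = int(np.floor(np.log2(max_abs))) + 1
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
        mant = np.clip(np.round(block / scale), -qmax - 1, qmax)
        out[start:start + block_size] = mant * scale
    return out

def bfp_dot(a, b, mantissa_bits=8, block_size=64):
    """HBFP-style dot product: operands quantized to BFP,
    accumulation kept in FP32."""
    qa = bfp_quantize(a, mantissa_bits, block_size)
    qb = bfp_quantize(b, mantissa_bits, block_size)
    return float(np.dot(qa, qb))
```

Because all values in a block share one exponent, a BFP dot product reduces to fixed-point multiply-accumulates plus a single exponent addition per block pair, which is what yields the hardware density gains the project reports.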
inactive
entered showcase: 2021-05-27
entry updated: 2024-03-15
This project has not yet been evaluated by the C4DT Factory team. We will be happy to evaluate it upon request.
Application
Python
BSD-3-Clause