Name:
ColTraIn HBFP Training Emulator
Description:
Co-located deep neural network training and inference
Professor — Lab:
Babak Falsafi — Parallel Systems Architecture Lab
Contact:
Tao Lin

Home page:
ColTraIn HBFP Training Emulator
Technical description:
HBFP is a hybrid Block Floating-Point (BFP)/Floating-Point (FP) number representation for DNN training, introduced by ColTraIn (Co-located DNN Training and Inference), a joint team of the PARSA and MLO labs at EPFL. HBFP offers the best of both worlds: the high accuracy of floating-point and the superior hardware density of fixed-point, by performing all dot products in BFP and all other operations in FP32. For a wide variety of models, HBFP matches floating-point accuracy while enabling hardware implementations that deliver up to 8.5x higher throughput. This repository hosts ongoing research on training DNNs with HBFP.
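To illustrate the core idea, the sketch below shows block floating-point in miniature: each block of values shares a single exponent, so a dot product reduces to an integer multiply-accumulate followed by one floating-point rescale. The function names, the mantissa width, and all details here are illustrative assumptions for exposition, not the emulator's actual API.

```python
import math

def to_bfp(block, mantissa_bits=8):
    """Quantize a block of floats to block floating-point:
    one shared exponent, fixed-point mantissas.
    Illustrative sketch, not the repository's implementation."""
    max_abs = max(abs(x) for x in block)
    if max_abs == 0.0:
        return [0] * len(block), 0
    # Shared exponent chosen so the largest value fills the mantissa range:
    # max_abs = m * 2**shared_exp with 0.5 <= m < 1.
    shared_exp = math.frexp(max_abs)[1]
    scale = 2.0 ** (mantissa_bits - 1 - shared_exp)
    limit = 2 ** (mantissa_bits - 1) - 1
    mantissas = [max(-limit, min(limit, round(x * scale))) for x in block]
    return mantissas, shared_exp

def bfp_dot(a, b, mantissa_bits=8):
    """Dot product in BFP: integer multiply-accumulate on the
    mantissas, then a single FP rescale at the end. This is the
    density win HBFP targets: the inner loop needs only fixed-point
    multipliers, while accuracy-critical ops stay in FP32."""
    ma, ea = to_bfp(a, mantissa_bits)
    mb, eb = to_bfp(b, mantissa_bits)
    acc = sum(x * y for x, y in zip(ma, mb))  # pure integer arithmetic
    return acc * 2.0 ** (ea + eb - 2 * (mantissa_bits - 1))
```

For example, `bfp_dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])` recovers the exact dot product 32.0 here because these values quantize without rounding error; in general the result is an approximation whose error shrinks as the mantissa width grows.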
Blog posts:
Papers:
Project status:
inactive — entered showcase: 2021-05-27 — entry updated: 2024-03-15

Source code:
Lab GitHub - last commit: 2023-02-16
Code quality:
This project has not yet been evaluated by the C4DT Factory team. We will be happy to evaluate it upon request.
Project type:
Application
Programming language:
Python
License:
BSD-3-Clause