Name:
mia
Description:
Library for running membership inference attacks (MIA) against machine learning models
Professor — Lab:
Carmela Troncoso — Security and Privacy Engineering Laboratory

Technical description:
Membership inference attacks target the privacy of a model's training data: an attacker tries to guess whether a given example was used to train a target model, using only queries to that model. See the paper by Shokri et al. for details. Currently, the library can be used to evaluate the robustness of Keras or PyTorch models to MIA.
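To illustrate the idea, the sketch below runs a simple confidence-thresholding membership inference attack against a toy scikit-learn model. This is a generic, hypothetical example of the concept described above, not the mia library's actual API: members of the training set often receive higher prediction confidence than held-out examples, and an attacker querying the model can exploit that gap.

```python
# Hypothetical sketch of a confidence-thresholding membership inference
# attack; this does NOT use the mia library's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# "Members" are used to train the target model; "non-members" are held out.
X_in, y_in = X[:200], y[:200]
X_out, y_out = X[200:], y[200:]

target = LogisticRegression(max_iter=1000).fit(X_in, y_in)

# The attacker only queries the model: higher maximum predicted
# probability on an example suggests it was seen during training.
conf_in = target.predict_proba(X_in).max(axis=1)
conf_out = target.predict_proba(X_out).max(axis=1)

# Guess "member" when confidence exceeds the midpoint of the two means.
threshold = 0.5 * (conf_in.mean() + conf_out.mean())
guesses = np.concatenate([conf_in, conf_out]) > threshold
truth = np.concatenate([np.ones(200), np.zeros(200)])
attack_acc = (guesses == truth).mean()
print(f"membership attack accuracy: {attack_acc:.2f}")
```

An attack accuracy noticeably above 0.5 (random guessing) indicates the model leaks membership information; the mia library automates this kind of evaluation with shadow models, following Shokri et al.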
Project status:
inactive — entered showcase: 2021-01-21 — entry updated: 2022-07-07

Source code:
Lab GitHub — last commit: 2021-10-20
Code quality:
This project has not yet been evaluated by the C4DT Factory team. We will be happy to evaluate it upon request.
Project type:
Application
Programming language:
Python
License:
MIT