Name:
Trickster
Description:
Sophisticated attacks on Machine Learning models
Professor — Lab:
Carmela Troncoso — Security and Privacy Engineering Laboratory

Technical description:
Trickster makes it possible to attack machine learning models in settings where the attack must be more sophisticated than simply adding noise to an image. A typical use case is evading an abuse-detection model, such as a social media bot detector or a malware detector.
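To illustrate the kind of attack described above, the sketch below runs a greedy search over discrete feature transformations (single-bit flips) to push a toy "bot detector" below its decision threshold. Everything here is hypothetical for illustration: the weights, the `predict_proba` scorer, and the transformation set are assumptions, and this does not use Trickster's actual API.

```python
# Illustrative sketch only: greedy adversarial search in a discrete feature
# space, in the spirit of attacks that go beyond adding noise to an image.
# The model, weights, and allowed transformations below are hypothetical.
import math

# Hypothetical linear "bot detector": probability > 0.5 means "bot".
WEIGHTS = [1.2, -0.4, 0.9, 0.7, -1.1]
BIAS = -0.3

def predict_proba(x):
    """Logistic score of a binary feature vector under the toy model."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def neighbours(x):
    """Allowed transformations: flip any single binary feature (unit cost)."""
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        yield y

def greedy_evade(x, threshold=0.5, max_cost=3):
    """Greedily flip features to push the score below `threshold`."""
    x, cost = list(x), 0
    while predict_proba(x) >= threshold and cost < max_cost:
        # Take the single flip that lowers the detector's score the most.
        x = min(neighbours(x), key=predict_proba)
        cost += 1
    return x, cost

original = [1, 0, 1, 1, 0]          # classified as a bot (score ~0.92)
adv, cost = greedy_evade(original)  # evades the detector within 3 flips
print(predict_proba(original), predict_proba(adv), cost)
```

Real tools in this space additionally constrain the search so that each transformation keeps the input valid in its domain (e.g. a modified binary must still execute), which is what distinguishes these attacks from pixel-level noise.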
Documentation:
Trickster Docs
Papers:
Project status:
inactive — entered showcase: 2019-03-18 — entry updated: 2022-07-07

Source code:
Lab GitHub - last commit: 2019-04-18
Code quality:
This project has not yet been evaluated by the C4DT Factory team. We will be happy to evaluate it upon request.
Project type:
Library
Programming language:
Python
License:
MIT