Name:
Neural Anisotropy Directions
Description:
Analyzing the role of the network architecture in shaping the inductive bias of deep classifiers.
Professor — Lab:
Pascal Frossard — Signal Processing Laboratory

Technical description:
In this work, we analyze the role of the network architecture in shaping the inductive bias of deep classifiers. To that end, we start from a very simple problem, classifying a family of linearly separable distributions, and show that, depending on the direction of the discriminative feature of the distribution, many state-of-the-art deep convolutional neural networks (CNNs) have a surprisingly hard time solving this simple task. We then define neural anisotropy directions (NADs) as the vectors that encapsulate the directional inductive bias of an architecture. These vectors are specific to each architecture and hence act as a signature: they encode the preference of a network to separate the input data based on particular features. We provide an efficient method to identify the NADs of several CNN architectures and thus reveal their directional inductive biases. Furthermore, we show that, on the CIFAR-10 dataset, NADs characterize the features used by CNNs to discriminate between classes.
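The linearly separable distributions mentioned above can be sketched as follows: each sample is Gaussian noise plus a shift of ±ε along a chosen unit direction, and the label is the sign of that shift, so the chosen direction carries the only discriminative feature. This is an illustrative numpy sketch under those assumptions, not the lab's implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def linearly_separable_dataset(direction, n_samples=1000, dim=1024,
                               eps=1.0, noise=1.0, seed=0):
    """Sample a toy linearly separable distribution whose only
    discriminative feature lies along `direction`.

    Each sample is isotropic Gaussian noise (with its component along
    `direction` removed) plus +/- eps along `direction`; the label is
    the sign of that shift.
    """
    rng = np.random.default_rng(seed)
    v = np.asarray(direction, dtype=float)
    v = v / np.linalg.norm(v)                       # unit discriminative direction
    labels = rng.integers(0, 2, n_samples) * 2 - 1  # labels in {-1, +1}
    noise_part = rng.normal(scale=noise, size=(n_samples, dim))
    # Project out the noise along v so that v carries only the signal.
    noise_part -= np.outer(noise_part @ v, v)
    X = noise_part + eps * labels[:, None] * v
    return X, labels
```

By sweeping `direction` over a basis (e.g. the Fourier basis of the input) and training the same architecture on each resulting dataset, one can probe how test accuracy varies with the direction of the discriminative feature, which is the kind of directional sensitivity the NADs summarize.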
Papers:
Project status:
inactive — entered showcase: 2021-01-27 — entry updated: 2024-03-21

Source code:
Lab GitHub - last commit: 2020-11-17
Code quality:
This project has not yet been evaluated by the C4DT Factory team. We will be happy to evaluate it upon request.
Project type:
Application
Programming language:
Python
License:
Apache-2.0