# Event Decay Neural Networks
Code for the paper "EDeNN: Event Decay Neural Networks for low latency vision".
⚠️ Refactor in progress: the current code is provided for reference. Restructuring and Docker images are on the way this week.
## Overview
EDeNNs bring the characteristics of convolutions to event cameras while acting directly on the event stream, without the loss of information incurred by traditional convolution-based networks.
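As a rough intuition only (this is not the repository's implementation), the idea can be pictured as a running feature map that decays exponentially between incoming events rather than being rebuilt from dense frames. Below is a minimal PyTorch sketch of such a decaying accumulation; the tensor layout, `alpha`, and the function name are chosen purely for illustration.

```python
import torch


def decayed_accumulation(events: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    """Toy illustration of event decay: a running state that fades between updates.

    events: (T, C, H, W) event counts per time bin (placeholder layout).
    alpha:  decay factor in (0, 1); lower values forget old events faster.
    """
    state = torch.zeros_like(events[0])
    outputs = []
    for t in range(events.shape[0]):
        # Previous activity decays rather than being discarded outright,
        # so information from sparse events persists across time bins.
        state = alpha * state + events[t]
        outputs.append(state)
    return torch.stack(outputs)


# Example: 10 time bins of two-polarity event counts at 180x240 resolution.
frames = torch.randint(0, 2, (10, 2, 180, 240)).float()
print(decayed_accumulation(frames).shape)  # torch.Size([10, 2, 180, 240])
```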
## Installation
- Clone this repository

  ```bash
  git clone https://gitlab.surrey.ac.uk/cw0071/edenn.git
  cd ./edenn/
  ```
- Set up conda environment

  ```bash
  conda env create -f environment.yml
  conda activate edenn
  pip install -e .
  ```
  To reproduce the other benchmarks, install SLAYER manually:

  ```bash
  # pip install submodules/slayerPytorch/
  pip install ./slayerpytorch/
  ```
- Download dataset(s)

  ```bash
  # Train set (48.2 GB download, 138 GB extracted)
  wget "http://rpg.ifi.uzh.ch/data/snn_angular_velocity/dataset/train.tar.zst" -O ./data/train.tar.zst
  zstd -vd ./data/train.tar.zst
  rm ./data/train.tar.zst
  tar -xvf ./data/train.tar -C ./data/
  rm ./data/train.tar

  # Validation set (2.7 GB download, 7.7 GB extracted)
  wget "http://rpg.ifi.uzh.ch/data/snn_angular_velocity/dataset/val.tar.zst" -O ./data/val.tar.zst
  zstd -vd ./data/val.tar.zst
  rm ./data/val.tar.zst
  tar -xvf ./data/val.tar -C ./data/
  rm ./data/val.tar

  # Test set (2.6 GB download, 7.4 GB extracted)
  wget "http://rpg.ifi.uzh.ch/data/snn_angular_velocity/dataset/test.tar.zst" -O ./data/test.tar.zst
  zstd -vd ./data/test.tar.zst
  rm ./data/test.tar.zst
  tar -xvf ./data/test.tar -C ./data/
  rm ./data/test.tar
  ```
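Once extraction finishes, a quick check that each split is present can save a failed run later. This helper is not part of the repository and assumes each archive unpacks into a correspondingly named directory under `data/`; adjust the paths if the actual layout differs.

```python
from pathlib import Path

# Optional sanity check (not part of the repository). The split directory
# names are assumptions; change them if the archives unpack differently.
data_root = Path("data")
for split in ("train", "val", "test"):
    split_dir = data_root / split
    n_files = sum(1 for p in split_dir.rglob("*") if p.is_file()) if split_dir.exists() else 0
    status = "OK" if n_files else "MISSING"
    print(f"{split:>5}: {status} ({n_files} files in {split_dir})")
```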
## Usage
### Testing
Download the model checkpoint here (0.0 GB).
```bash
./test.py checkpoint.ckpt --nolog
```
The parameters will be loaded from the model, but you can override them if you wish.
Usage:

```
usage: test.py [-h] [--limit_test LIMIT_TEST] [--nolog] [--overfit OVERFIT] checkpoint_path

positional arguments:
  checkpoint_path       Path to trained checkpoint (.ckpt)

options:
  -h, --help            show this help message and exit
  --limit_test LIMIT_TEST
                        Use this test set proportion (float) or batches (int) each epoch (still randomised over entire dataset) (default: None)
  --nolog               Don't log to wandb (default: False)
```
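If you want to see which parameters a checkpoint carries before overriding them on the command line, the file can be inspected directly. This is a small sketch, assuming a PyTorch Lightning-style `.ckpt` (a `torch.save` dictionary that usually includes a `hyper_parameters` entry); the exact keys may differ.

```python
import torch

# Peek inside a checkpoint before running test.py. The "hyper_parameters" key
# follows PyTorch Lightning's usual layout and is an assumption here.
# On newer PyTorch versions you may need torch.load(..., weights_only=False).
ckpt = torch.load("checkpoint.ckpt", map_location="cpu")
print(sorted(ckpt.keys()))
print(ckpt.get("hyper_parameters", "no hyper_parameters entry found"))
```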
### Reproducing SNN baseline
From the paper "Event-Based Angular Velocity Regression with Spiking Networks" (code)
Download their model weights:

```bash
wget "http://rpg.ifi.uzh.ch/data/snn_angular_velocity/models/pretrained.pt" -O cnn5-avgp-fc1.pt
```

Then run this test script:

```bash
./test_snn.py SNN_baseline cnn5-avgp-fc1.pt data/ --nolog
```
```
usage: test_snn.py [-h] [--limit_test LIMIT_TEST] [--nolog] [-a BATCH_ACCUMULATION] [-w WORKERS] name checkpoint_path dataset_path

positional arguments:
  name                  Name of run
  checkpoint_path       Path to provided checkpoint (cnn5-avgp-fc1.pt)
  dataset_path          Dataset directory

options:
  -h, --help            show this help message and exit
  --limit_test LIMIT_TEST
                        Use this test set proportion (float) or batches (int) each epoch (still randomised over entire dataset) (default: None)
  --nolog               Don't log to wandb (default: False)
  -a BATCH_ACCUMULATION, --batch_accumulation BATCH_ACCUMULATION
                        Perform batch accumulation (default: 1)
  -w WORKERS, --workers WORKERS
                        Dataset workers (can use 0) (default: 12)
```
### Training
Example:

```bash
./train.py EDeNN-reproduce AngularVelocity AngularVelocity --batch_size 8 --nolog
```
Usage:

```
usage: train.py [--optuna OPTUNA] [--seed SEED] [--lr LR] [--nolog] [--limit_train LIMIT_TRAIN] [--limit_val LIMIT_VAL] [--overfit OVERFIT] [--max_epochs MAX_EPOCHS] name MODEL: {AngularVelocity} DATASET: {AngularVelocity}

positional arguments:
  name                  Name of run
  MODEL: {AngularVelocity}
                        Model to train
  DATASET: {AngularVelocity}
                        Dataset to train/val/test on

Trainer:
  --optuna OPTUNA       Optimise with optuna using this storage URL. Examples: 'sqlite:///optuna.db' or 'postgresql://postgres:password@host:5432/postgres' (default: None)
  --seed SEED           Use specified random seed for everything (default: None)
  --lr LR               Learning rate (default: 0.01)
  --nolog               Don't log to wandb (default: False)
  --limit_train LIMIT_TRAIN
                        Use this train set proportion (float) or batches (int) each epoch (still randomised over entire dataset) (default: None)
  --limit_val LIMIT_VAL
                        Use this val set proportion (float) or batches (int) each epoch (still randomised over entire dataset) (default: None)
  --overfit OVERFIT     Overfit to this proportion (float) or batches (int), use train set for val (default: 0.0)
  --max_epochs MAX_EPOCHS
                        Maximum number of epochs (default: -1)
```
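The `--optuna` option takes a storage URL such as `sqlite:///optuna.db`. If you want to create or inspect the study outside of `train.py`, the sketch below uses the standard optuna API; the study name and optimisation direction are placeholders and may not match what `train.py` registers.

```python
import optuna

# Attach to (or create) a study in the same SQLite storage that can be passed
# to train.py via --optuna. "edenn-study" is a placeholder name.
study = optuna.create_study(
    storage="sqlite:///optuna.db",
    study_name="edenn-study",
    direction="minimize",
    load_if_exists=True,
)
print(f"{len(study.trials)} trials recorded so far")
```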