forked from yuetan031/FedProto

A prototype-based Federated Learning algorithm


FedProto: Federated Prototype Learning over Heterogeneous Devices

Implementation of the vanilla federated learning paper: Communication-Efficient Learning of Deep Networks from Decentralized Data.

Experiments are run on MNIST, Fashion MNIST and CIFAR10 (both IID and non-IID). In the non-IID case, the data can be split among the users equally or unequally.
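The repo's own sampling utilities implement the actual partitioning; as a rough, hedged illustration only, an unequal non-IID split can be sketched by giving each user a random subset of classes (the function name and mechanics below are illustrative, not the repo's exact sampler):

```python
import numpy as np

def unequal_noniid_split(labels, num_users, ways=3, stdev=2, seed=1234):
    """Toy sketch: each user sees `ways` classes, varied by up to `stdev`,
    and receives the indices of samples from those classes.
    Illustrative only -- not the repo's exact sampling code."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    user_idxs = {}
    for u in range(num_users):
        # number of local classes varies around `ways`
        k = int(np.clip(ways + rng.integers(-stdev, stdev + 1), 1, len(classes)))
        local_classes = rng.choice(classes, size=k, replace=False)
        idxs = np.where(np.isin(labels, local_classes))[0]
        user_idxs[u] = rng.permutation(idxs)
    return user_idxs

labels = np.repeat(np.arange(10), 100)   # 10 classes, 100 samples each
split = unequal_noniid_split(labels, num_users=5)
```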

Since the purpose of these experiments is to illustrate the effectiveness of the federated learning paradigm, only simple models such as MLP and CNN are used.
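For a sense of how small such a model is, the forward pass of a two-layer MLP can be sketched in plain NumPy (the repo's actual models are PyTorch nn.Modules; every name and shape below is illustrative):

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """Minimal two-layer MLP forward pass (illustrative sketch; the repo
    defines its models as PyTorch modules, not NumPy functions)."""
    h = np.maximum(x @ w1 + b1, 0.0)     # hidden layer with ReLU
    logits = h @ w2 + b2                 # raw class scores
    return logits

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 784))        # batch of 4 flattened 28x28 images
w1, b1 = rng.standard_normal((784, 64)) * 0.01, np.zeros(64)
w2, b2 = rng.standard_normal((64, 10)) * 0.01, np.zeros(10)
out = mlp_forward(x, w1, b1, w2, b2)
```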

Requirements

This code requires the following:

  • Python 3.6 or greater
  • PyTorch 1.6 or greater
  • Torchvision
  • Numpy 1.18.5

Data Preparation

Running the experiments

The baseline experiment trains the model in the conventional way.

  • To train FedProto on MNIST with n=3, k=100 under the statistical heterogeneity setting:
python federated_main.py --mode task_heter --dataset mnist --num_classes 10 --num_users 20 --ways 3 --shots 100 --stdev 2 --rounds 100 --train_shots_max 110 --ld 1
  • To train FedProto on FEMNIST with n=4, k=100 under both statistical and model heterogeneity settings:
python federated_main.py --mode model_heter --dataset femnist --num_classes 62 --num_users 20 --ways 4 --shots 100 --stdev 2 --rounds 120 --train_shots_max 110 --ld 1
  • To train FedProto on CIFAR10 with n=5, k=100 under the statistical heterogeneity setting:
python federated_main.py --mode task_heter --dataset cifar10 --num_classes 10 --num_users 20 --ways 5 --shots 100 --stdev 2 --rounds 110 --train_shots_max 110 --ld 0.1

You can change the default values of other parameters to simulate different conditions. Refer to the options section.

Options

The default values for the various parameters passed to the experiment are given in options.py. Details of some of those parameters:

  • --dataset: Default: 'mnist'. Options: 'mnist', 'femnist', 'cifar10'
  • --num_classes: Default: 10. Use 10 for 'mnist' and 'cifar10', 62 for 'femnist'
  • --mode: Default: 'task_heter'. Options: 'task_heter', 'model_heter'
  • --seed: Random Seed. Default set to 1234.
  • --lr: Learning rate set to 0.01 by default.
  • --momentum: Momentum set to 0.5 by default.
  • --local_bs: Local batch size set to 4 by default.
  • --verbose: Detailed log outputs. Activated by default, set to 0 to deactivate.
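The authoritative defaults live in the repo's options.py; purely as a hedged sketch of how such flags could be declared with argparse (flag names and defaults taken from this README, everything else illustrative):

```python
import argparse

def args_parser():
    # Sketch of an options parser matching the flags listed above;
    # the repo's options.py is the authoritative version.
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', type=str, default='mnist',
                        choices=['mnist', 'femnist', 'cifar10'])
    parser.add_argument('--num_classes', type=int, default=10)
    parser.add_argument('--mode', type=str, default='task_heter',
                        choices=['task_heter', 'model_heter'])
    parser.add_argument('--seed', type=int, default=1234)
    parser.add_argument('--lr', type=float, default=0.01)
    parser.add_argument('--momentum', type=float, default=0.5)
    parser.add_argument('--local_bs', type=int, default=4)
    parser.add_argument('--verbose', type=int, default=1)
    return parser

args = args_parser().parse_args([])   # parse with defaults only
```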

Federated Parameters

  • --mode: Default: 'task_heter'. Options: 'task_heter', 'model_heter'
  • --num_users: Number of users. Default is 20.
  • --ways: Average number of local classes. Default is 3.
  • --shots: Average number of samples for each local class. Default is 100.
  • --test_shots: Average number of test samples for each local class. Default is 15.
  • --ld: Weight of proto loss. Default is 1.
  • --stdev: Standard deviation. Default is 1.
  • --train_ep: Number of local training epochs in each user. Default is 1.
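The --ld flag weights FedProto's prototype regularizer: each client averages its feature vectors per class into local prototypes, the server averages these into global prototypes, and local training adds a distance penalty pulling features toward the global prototype of their class. A hedged NumPy sketch of that idea (unweighted averaging for simplicity; the paper weights by sample counts, and all names here are illustrative):

```python
import numpy as np

def local_prototypes(features, labels):
    """Client side: mean feature vector per locally seen class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def aggregate_prototypes(all_local_protos):
    """Server side: average each class's prototypes over the clients that
    reported it (simplified unweighted mean)."""
    merged = {}
    for protos in all_local_protos:
        for c, p in protos.items():
            merged.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in merged.items()}

def proto_loss(features, labels, global_protos, ld=1.0):
    """ld * mean squared distance between each sample's feature and the
    global prototype of its class -- the regularizer weighted by --ld."""
    dists = [np.sum((f - global_protos[y]) ** 2)
             for f, y in zip(features, labels) if y in global_protos]
    return ld * float(np.mean(dists)) if dists else 0.0

# Tiny usage example: one client, 2-D features, two classes.
feats = np.array([[0., 0.], [2., 2.], [4., 4.]])
labs = np.array([0, 0, 1])
gp = aggregate_prototypes([local_prototypes(feats, labs)])
loss = proto_loss(feats, labs, gp, ld=1.0)
```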
