Commit 3d693ed4 authored by IamTao

update the readme.

parent ec0ae8a2
# CHOCO-SGD
This repository contains the code for the main experiments in the papers [Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication](https://arxiv.org/abs/1902.00340) and [Decentralized Deep Learning with Arbitrary Communication Compression](https://arxiv.org/abs/1907.09356).
Please refer to the folders `convex_code` and `dl_code` for more details.
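For orientation, below is a minimal single-process NumPy simulation of one CHOCO-SGD step as described in the first paper. The top-k compressor, step sizes, and ring gossip matrix are illustrative assumptions for this sketch and do not reflect the repository's actual API; see the code folders for the real implementations.
```python
# Illustrative sketch of one CHOCO-SGD step, simulated on a single machine.
# Compressor, step sizes, and topology below are example choices only.
import numpy as np

def top_k(v, k):
    # top-k sparsification: keep only the k largest-magnitude entries
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def choco_sgd_step(x, x_hat, grads, W, lr=0.1, gamma=0.5, k=10):
    # x:     (n, d) local iterates, one row per node
    # x_hat: (n, d) publicly shared estimates of the iterates
    # grads: (n, d) stochastic gradients evaluated at x
    # W:     (n, n) symmetric doubly stochastic gossip matrix
    x_half = x - lr * grads                          # local SGD step
    q = np.stack([top_k(x_half[i] - x_hat[i], k)     # compress the change
                  for i in range(x.shape[0])])
    x_hat = x_hat + q                                # update shared estimates
    x = x_half + gamma * (W @ x_hat - x_hat)         # gossip on the estimates
    return x, x_hat

# Example: 4 nodes on a ring, uniform weights over self and two neighbours.
n, d = 4, 10
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0
rng = np.random.default_rng(0)
x = rng.normal(size=(n, d))
x, x_hat = choco_sgd_step(x, np.zeros((n, d)), rng.normal(size=(n, d)), W)
```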
### Datasets and Setup
First, download the datasets from the LIBSVM library and convert them to pickle format:
```
cd data
wget -t inf https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/epsilon_normalized.bz2
wget -t inf https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/rcv1_test.binary.bz2
cd ../code
python pickle_datasets.py
```
If you run out of memory, you can keep the rcv1 dataset in sparse format, but this will slow down training.
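The exact behaviour of `pickle_datasets.py` is defined in the repository; as a rough sketch of the conversion it performs, assuming scikit-learn is available (the output file names and the `(X, y)` pickle layout here are illustrative, not the script's actual format):
```python
# Hypothetical sketch of the LIBSVM-to-pickle conversion; file names and the
# (X, y) layout are assumptions, not necessarily the script's actual format.
import pickle
from sklearn.datasets import load_svmlight_file

def pickle_dataset(svm_path, out_path, dense=False):
    # load_svmlight_file decompresses .bz2 files on the fly and returns a
    # scipy.sparse CSR matrix X together with the label vector y
    X, y = load_svmlight_file(svm_path)
    if dense:
        X = X.toarray()  # dense is faster to train on but needs more memory
    with open(out_path, "wb") as f:
        pickle.dump((X, y), f, protocol=pickle.HIGHEST_PROTOCOL)

pickle_dataset("../data/epsilon_normalized.bz2", "../data/epsilon.pickle", dense=True)
pickle_dataset("../data/rcv1_test.binary.bz2", "../data/rcv1.pickle")  # keep rcv1 sparse
```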
### Reproduce the results
To run the experiments on the `epsilon` dataset:
```
python experiment_epsilon_final.py final
```
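Before launching a long run, it can help to check that the pickled data loads cleanly. This quick check assumes the file name and `(X, y)` layout from the sketch in the setup section:
```python
# Hypothetical sanity check; the pickle path and layout follow the earlier
# sketch, not necessarily the repository's actual output.
import pickle

with open("../data/epsilon.pickle", "rb") as f:
    X, y = pickle.load(f)
print(X.shape, y.shape)  # epsilon_normalized: 400,000 examples, 2,000 features
```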
# Reference
If you use this code, please cite the following papers:
```
@inproceedings{ksj2019choco,
  author    = {Anastasia Koloskova and Sebastian U. Stich and Martin Jaggi},
  title     = {Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication},
  booktitle = {ICML 2019 - Proceedings of the 36th International Conference on Machine Learning},
  url       = {http://proceedings.mlr.press/v97/koloskova19a.html},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  volume    = {97},
  pages     = {3479--3487},
  year      = {2019}
}
```
and
```
@article{koloskova2019decentralized,
  author  = {Koloskova, Anastasia and Lin, Tao and Stich, Sebastian U and Jaggi, Martin},
  title   = {Decentralized Deep Learning with Arbitrary Communication Compression},
  journal = {arXiv preprint arXiv:1907.09356},
  year    = {2019}
}
```