Torch Anderson
The list of changes:
- reimplemented utils/anderson_acceleration.py (see the Anderson update sketch after this list)
- removed modules/AccelerationModule.py:
  - in my opinion, it is unnecessary
  - moved all logic to modules/optimizers.py
- updated modules/optimizers.py:
  - removed the abstract Optimizers class
  - the training loop is now implemented only in the FixedPointIteration class
  - DeterministicAcceleration now inherits from FixedPointIteration and only reimplements its accelerate method (see the class sketch after this list)
  - acceleration is now performed after every parameter update, not just once per epoch (not sure about this)
- added with torch.no_grad() logic to avoid memory leaks
- the history of updates is now a collections.deque instead of a list (both shown in the last sketch after this list)
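
For reference, a minimal sketch of the kind of update utils/anderson_acceleration.py performs, assuming the usual regularized Anderson extrapolation over a window of stored iterates. The function name, signature, regularization constant, and relaxation handling are illustrative assumptions, not the repository's actual code.

```python
import torch


@torch.no_grad()
def anderson(X, relaxation=1.0, reg=1e-10):
    """One Anderson extrapolation step from a window of stored iterates.

    X : (m+1, n) tensor whose rows are the flattened iterates
        x_{k-m}, ..., x_k of a fixed-point map x_{i+1} = g(x_i).
    Returns the extrapolated iterate as a 1-D tensor of length n.
    """
    # Residuals r_i = x_{i+1} - x_i, one per row.
    R = X[1:] - X[:-1]                                   # (m, n)
    m = R.shape[0]
    # Minimize ||c^T R|| subject to sum(c) = 1 via the regularized
    # normal equations (R R^T + reg*I) z = 1, then normalize c = z / sum(z).
    RR = R @ R.T + reg * torch.eye(m, dtype=X.dtype, device=X.device)
    z = torch.linalg.solve(RR, torch.ones(m, dtype=X.dtype, device=X.device))
    c = z / z.sum()
    # Mixed extrapolation: with relaxation = 1 this is sum_i c_i * x_{i+1}.
    return c @ X[:-1] + relaxation * (c @ R)
```

The normal-equations system is only m-by-m, so one extrapolation step is cheap compared with a forward/backward pass, regardless of the number of parameters n.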
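A hypothetical sketch of how modules/optimizers.py could be organized after these changes: FixedPointIteration owns the whole training loop and exposes accelerate() as a no-op hook that fires after every optimizer step. Only the class and method names come from the description above; the constructor arguments and loop details are assumptions.

```python
import torch


class FixedPointIteration:
    """Owns the full training loop; acceleration is a no-op hook here."""

    def __init__(self, model, loss_fn, optimizer):
        self.model = model
        self.loss_fn = loss_fn
        self.optimizer = optimizer

    def accelerate(self):
        # Plain fixed-point iteration: nothing to extrapolate.
        pass

    def train_epoch(self, loader):
        for inputs, targets in loader:
            self.optimizer.zero_grad()
            loss = self.loss_fn(self.model(inputs), targets)
            loss.backward()
            self.optimizer.step()
            # The hook fires after every parameter update, not once per epoch.
            self.accelerate()
```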
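Continuing the same hypothetical sketch, DeterministicAcceleration reuses the parent loop and only overrides accelerate(). The torch.no_grad() decorator keeps the parameter snapshots and the extrapolated vector out of the autograd graph, and the maxlen-bounded collections.deque discards old snapshots automatically, which an unbounded list would not do. The anderson helper and the history_depth/relaxation arguments are the illustrative names introduced above.

```python
import collections

import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters


class DeterministicAcceleration(FixedPointIteration):
    """Same training loop as the parent class; only accelerate() is overridden."""

    def __init__(self, model, loss_fn, optimizer, history_depth=5, relaxation=1.0):
        super().__init__(model, loss_fn, optimizer)
        # A bounded deque drops the oldest snapshot automatically, so the
        # stored history cannot grow without limit the way a plain list can.
        self.history = collections.deque(maxlen=history_depth)
        self.relaxation = relaxation

    @torch.no_grad()  # detached snapshots: no autograd graph is kept alive
    def accelerate(self):
        # Store the current parameters as one flat, detached vector.
        self.history.append(parameters_to_vector(self.model.parameters()).detach())
        if len(self.history) < 3:
            return  # need at least two residuals before extrapolating
        x_acc = anderson(torch.stack(list(self.history)), self.relaxation)
        # Write the extrapolated vector back into the live parameters.
        vector_to_parameters(x_acc, self.model.parameters())
```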