
Torch Anderson

Reshniak, Viktor requested to merge torch_Anderson into master

The list of changes:

  • reimplemented utils/anderson_acceleration.py (a rough sketch of the accelerated step is included after this list)
  • removed modules/AccelerationModule.py:
    • in my opinion, it is unnecessary
    • moved all logic to modules/optimizers.py
  • updated modules/optimizers.py:
    • removed abstract Optimizers class
    • the training loop is now implemented only in FixedPointIteration class
    • DeterministicAcceleration now inherits from FixedPointIteration and only overrides its accelerate method (see the class sketch below)
    • acceleration is now performed every time the parameters are updated, not just once per epoch (not sure about this)
    • added with torch.no_grad() blocks to avoid memory leaks
    • the history of updates is now a collections.deque instead of a list
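
For reference, a minimal sketch of the Anderson-accelerated step as I understand it. The function name `anderson_step`, its arguments, and the regularization/damping defaults are illustrative assumptions and do not necessarily match the actual utils/anderson_acceleration.py:

```python
import torch

def anderson_step(X_hist, GX_hist, reg=1e-10, beta=1.0):
    """One Anderson-accelerated update from histories of iterates x_k and g(x_k).

    X_hist, GX_hist: sequences of flattened 1-D tensors, most recent last.
    reg:  Tikhonov regularization for the small least-squares system.
    beta: damping/relaxation factor (beta=1 gives the undamped update).
    """
    X = torch.stack(list(X_hist), dim=1)    # (n, m), columns are past iterates x_k
    GX = torch.stack(list(GX_hist), dim=1)  # (n, m), columns are g(x_k)
    F = GX - X                              # residuals f_k = g(x_k) - x_k

    if X.shape[1] == 1:                     # not enough history: plain (damped) step
        return X[:, 0] + beta * F[:, 0]

    dX = X[:, 1:] - X[:, :-1]               # (n, m-1) iterate differences
    dF = F[:, 1:] - F[:, :-1]               # (n, m-1) residual differences

    # gamma = argmin || f_k - dF @ gamma ||, via regularized normal equations
    A = dF.T @ dF + reg * torch.eye(dF.shape[1], dtype=dF.dtype, device=dF.device)
    gamma = torch.linalg.solve(A, dF.T @ F[:, -1])

    # x_{k+1} = x_k + beta * f_k - (dX + beta * dF) @ gamma
    return X[:, -1] + beta * F[:, -1] - (dX + beta * dF) @ gamma
```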
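And a rough sketch of the new class layout in modules/optimizers.py: FixedPointIteration owns the training loop and calls accelerate() after every optimizer step, while DeterministicAcceleration only overrides accelerate(), keeps a bounded deque of parameter vectors, and works inside torch.no_grad(). Apart from the two class names and the accelerate hook, everything here (constructor arguments, history_depth, the reuse of the anderson_step sketched above) is an illustrative assumption rather than the actual code:

```python
from collections import deque

import torch

class FixedPointIteration:
    """Plain training loop; accelerate() is a no-op hook in the base class."""

    def __init__(self, model, optimizer, loss_fn):
        self.model, self.optimizer, self.loss_fn = model, optimizer, loss_fn

    def accelerate(self):
        pass  # no acceleration for plain fixed-point iteration

    def train(self, loader, num_epochs):
        for _ in range(num_epochs):
            for inputs, targets in loader:
                self.optimizer.zero_grad()
                self.loss_fn(self.model(inputs), targets).backward()
                self.optimizer.step()
                self.accelerate()  # applied after every parameter update, not once per epoch

class DeterministicAcceleration(FixedPointIteration):
    """Overrides only accelerate(); keeps a bounded history of parameter vectors."""

    def __init__(self, model, optimizer, loss_fn, history_depth=5):
        super().__init__(model, optimizer, loss_fn)
        self.history = deque(maxlen=history_depth)  # old entries drop out automatically

    def accelerate(self):
        # no_grad keeps the stored iterates out of the autograd graph (the memory-leak fix)
        with torch.no_grad():
            flat = torch.cat([p.detach().flatten() for p in self.model.parameters()])
            self.history.append(flat)
            if len(self.history) < 2:
                return
            # consecutive optimizer iterates play the roles of x_k and g(x_k)
            hist = list(self.history)
            x_acc = anderson_step(hist[:-1], hist[1:])
            # write the accelerated iterate back into the model parameters
            offset = 0
            for p in self.model.parameters():
                n = p.numel()
                p.copy_(x_acc[offset:offset + n].view_as(p))
                offset += n
```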
