Unofficial implementation of Switching from Adam to SGD optimization in PyTorch.

Switching from Adam to SGD

Wilson et al. (2018) show that "the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks."

"SWATS from Keskar & Socher (2017) a high-scoring paper by ICLR in 2018, a method proposed to automatically switch from Adam to SGD for better generalization performance. The idea of the algorithm itself is very simple. It uses Adam, which works well despite minimal tuning, but after learning until a certain stage, it is taken over by SGD."

Usage

The package can be installed with pip, either directly from this git repository or from PyPI, using one of the following commands.

pip install git+https://github.com/Mrpatekful/swats
pip install pytorch-swats

After installation, SWATS can be used like any other torch.optim.Optimizer. The following code snippet gives a brief overview of how to use the algorithm.

import torch
import swats

# model and loss_fn are assumed to be defined elsewhere
optimizer = swats.SWATS(model.parameters())
data_loader = torch.utils.data.DataLoader(...)

for epoch in range(10):
    for inputs, targets in data_loader:
        # clear the gradients stored from the previous step
        optimizer.zero_grad()

        # forward pass and loss computation
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
        loss.backward()

        # parameter update (Adam initially, SGD after the switch)
        optimizer.step()
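
Since SWATS is a regular torch.optim.Optimizer, its state can be saved and restored through the standard state_dict mechanism, just like the model's parameters. A minimal sketch, reusing model and optimizer from the snippet above and a hypothetical checkpoint path:

import torch

# 'checkpoint.pt' is a hypothetical path; model and optimizer come from above.
torch.save({
    'model_state': model.state_dict(),
    'optimizer_state': optimizer.state_dict(),
}, 'checkpoint.pt')

# later, restore both the model and the optimizer state
checkpoint = torch.load('checkpoint.pt')
model.load_state_dict(checkpoint['model_state'])
optimizer.load_state_dict(checkpoint['optimizer_state'])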
