You can blend two images and use the result as a new training example.
The core idea for this post is actually from 2017, and it is very unexpected and impressive.
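A minimal sketch of this blending trick (known in the literature as mixup): sample a mixing weight from a Beta distribution and take the same convex combination of both inputs and their one-hot labels. The function name and the `alpha=0.2` value here are my own illustrative choices, not from the post.

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two examples and their one-hot labels into one new example.

    alpha=0.2 is an assumed illustrative value, not taken from the post.
    """
    lam = random.betavariate(alpha, alpha)  # mixing weight in (0, 1)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

With small alpha the Beta distribution concentrates near 0 and 1, so most blended examples stay close to one of the two originals.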
Hobbyist TL;DR;s for seemingly important ML papers
Commonly, NNs are trained to solve one specific task (e.g. playing Go in AlphaGo), requiring a lot of data (millions of self-play games). Ultimately, we’d like a NN that can quickly learn a new task (e.g. chess) from just a few examples, reusing experience from other tasks the way people do.
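One well-known 2017 approach in this spirit is MAML-style meta-learning: learn an initialization that adapts to a new task after only a few gradient steps. The post doesn't name a specific paper, so treat this 1-D toy (the quadratic per-task loss, the task distribution, and both learning rates are my own invention) purely as an illustration of the inner/outer loop structure.

```python
import random

# Toy MAML-style sketch (assumed setup, not any paper's exact method):
# each "task" t is fitting a scalar target, loss_t(theta) = (theta - t)**2.
# Inner loop: one gradient step per task; outer loop: update the shared
# initialization theta so that one inner step works well on any task.

INNER_LR = 0.1   # alpha: per-task adaptation step size (assumed value)
OUTER_LR = 0.01  # meta step size (assumed value)

def adapt(theta, t):
    """One inner gradient step on task t: grad of (theta - t)^2 is 2(theta - t)."""
    return theta - INNER_LR * 2 * (theta - t)

def meta_grad(theta, t):
    """Gradient of the post-adaptation loss w.r.t. the initialization theta.

    loss(adapt(theta)) = (1 - 2*INNER_LR)^2 * (theta - t)^2, by the chain rule.
    """
    return 2 * (1 - 2 * INNER_LR) ** 2 * (theta - t)

random.seed(0)
theta = 0.0
for _ in range(2000):                  # meta-training over random tasks
    t = random.uniform(-1.0, 1.0)
    theta -= OUTER_LR * meta_grad(theta, t)

# After meta-training, theta sits near the center of the task distribution,
# so a single inner step moves most of the way toward any brand-new target.
new_task = 0.8
adapted = adapt(theta, new_task)
```

In this toy the meta-gradient simply pulls the initialization toward the task distribution's mean; the point is only the two-level structure, where the outer update differentiates through the inner adaptation step.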
The paper brings a second-order optimization method to a training speed comparable with its first-order counterparts (SGD, Adam). The second-order method appears superior in terms of final model performance, as tested on CIFAR and ImageNet using ResNet and VGG-f, and it did not require hyperparameter tuning.
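To see why curvature information helps at all, here is a toy contrast (my own illustration, not the paper's method) between plain gradient descent and a Newton step on an ill-conditioned quadratic: first-order steps are capped by the largest curvature, while preconditioning by the inverse Hessian lands on the minimum in one step. The hard part such papers solve is approximating that inverse cheaply enough to match first-order wall-clock speed.

```python
# Toy illustration of second- vs first-order updates on an ill-conditioned
# quadratic f(w) = 0.5 * (200 * w0^2 + 2 * w1^2), with a diagonal Hessian.

H = [200.0, 2.0]          # curvatures: 200 along w[0], 2 along w[1]

def grad(w):
    return [H[i] * w[i] for i in range(2)]

# First-order: the step size must stay below 2/200 to avoid divergence,
# so the low-curvature direction w[1] crawls.
w = [1.0, 1.0]
for _ in range(100):
    g = grad(w)
    w = [w[i] - 0.009 * g[i] for i in range(2)]
# w[0] shrinks by |1 - 0.009*200| = 0.8 per step (tiny after 100 steps),
# but w[1] shrinks by only 0.982 per step: roughly 0.982**100, about 0.16.

# Second-order: precondition the gradient by the inverse Hessian.
# For a quadratic, the Newton step w - H^{-1} g hits the minimum exactly.
w2 = [1.0, 1.0]
g = grad(w2)
w2 = [w2[i] - g[i] / H[i] for i in range(2)]
```

The quadratic makes Newton's advantage look absolute; on real networks the Hessian is huge and non-constant, which is exactly why efficient approximations are the contribution.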