Expanding an adaptive learning-rate algorithm to handle mini-batch training
Resilient backpropagation (Rprop) is a robust and accurate optimization method for training neural networks with batch learning. Because its adaptive step sizes are driven by the signs of gradients computed over the full dataset, Rprop must process every training example at each iteration, which slows it down on large datasets compared with mini-batch methods. We develop and empirically evaluate a variant of Rprop, S-Rprop, which can handle mini-batch training.
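To make the full-batch dependence concrete, the following is a minimal sketch of the classic Rprop update rule (the Rprop- variant, which zeroes the gradient after a sign change). The function name `rprop_minimize` and all hyperparameter defaults are illustrative choices, not part of this work; the sign-comparison logic is the standard algorithm, applied here to a toy quadratic rather than a neural network.

```python
import numpy as np

def rprop_minimize(grad_fn, theta0, n_iter=100,
                   eta_plus=1.2, eta_minus=0.5,
                   delta0=0.1, delta_min=1e-6, delta_max=50.0):
    """Full-batch Rprop-: each parameter has its own step size,
    grown when the gradient keeps its sign and shrunk when it flips.
    grad_fn must return the gradient over the *entire* dataset."""
    theta = np.array(theta0, dtype=float)
    delta = np.full_like(theta, delta0)      # per-parameter step sizes
    prev_grad = np.zeros_like(theta)
    for _ in range(n_iter):
        g = grad_fn(theta)
        sign_change = prev_grad * g
        # same sign -> grow step; opposite sign -> shrink step
        delta = np.where(sign_change > 0,
                         np.minimum(delta * eta_plus, delta_max),
                         np.where(sign_change < 0,
                                  np.maximum(delta * eta_minus, delta_min),
                                  delta))
        # Rprop-: suppress the update after a sign flip
        g = np.where(sign_change < 0, 0.0, g)
        # the update uses only the sign of the gradient, not its magnitude
        theta -= np.sign(g) * delta
        prev_grad = g
    return theta

# toy example: minimize f(theta) = sum(theta**2), gradient 2*theta
theta = rprop_minimize(lambda t: 2 * t, [3.0, -4.0])
print(theta)
```

Because the update depends only on gradient *signs*, noisy mini-batch gradients flip signs spuriously and keep shrinking the step sizes, which is why vanilla Rprop is tied to full-batch gradients.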
