Title

A Batch Variable Learning Rate Gradient Descent Algorithm with the Smoothing L1/2 Regularization for Takagi-Sugeno Models

Document Type

Article

Publication Title

IEEE Access

Abstract

A batch variable learning rate gradient descent algorithm is proposed to efficiently train a neuro-fuzzy network of zero-order Takagi-Sugeno inference systems. To exploit the advantages of regularization, the smoothing L1/2 regularization is used to obtain a more appropriately sparse network. By combining second-order information of the smoothing error function, a variable learning rate is chosen along the steepest descent direction, which avoids a line search procedure and may reduce the computational cost. To appropriately adjust the Lipschitz constant of the smoothing error function in the learning rate, a new scheme is proposed that introduces a hyper-parameter. The article also applies a modified secant equation to estimate the Lipschitz constant, which greatly reduces oscillation and improves the robustness of the algorithm. Under appropriate assumptions, a convergence result for the proposed algorithm is also given. Simulation results on two identification and classification problems show that the proposed algorithm achieves better numerical performance and enhances the sparsity of the network, compared with the common batch gradient descent algorithm and a variable learning rate gradient-based algorithm.
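
The sketch below illustrates the general idea summarized in the abstract: batch gradient descent on an error function augmented with a smoothed L1/2 penalty, with the learning rate set from an estimated Lipschitz constant. It is a minimal illustration only, assuming the piecewise-polynomial smoothing of |w| commonly used in the smoothing L1/2 literature, a plain secant-style Lipschitz estimate rather than the paper's modified secant equation, and a simple linear least-squares model in place of the zero-order Takagi-Sugeno network; all function names and parameter values are hypothetical.

```python
import numpy as np

def smooth_abs(w, a=0.05):
    """Piecewise-polynomial smoothing of |w| (differentiable everywhere, assumed form)."""
    poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) < a, poly, np.abs(w))

def smooth_abs_grad(w, a=0.05):
    poly_grad = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) < a, poly_grad, np.sign(w))

def objective(w, X, y, lam, a=0.05):
    """Smoothing error function: squared error plus smoothed L1/2 penalty lam * sum(sqrt(f(w)))."""
    resid = X @ w - y
    return 0.5 * np.sum(resid**2) + lam * np.sum(np.sqrt(smooth_abs(w, a)))

def gradient(w, X, y, lam, a=0.05):
    resid = X @ w - y
    # d/dw sqrt(f(w)) = f'(w) / (2 sqrt(f(w))); f(w) >= 3a/8 > 0, so no division by zero
    pen_grad = lam * smooth_abs_grad(w, a) / (2.0 * np.sqrt(smooth_abs(w, a)))
    return X.T @ resid + pen_grad

def train(X, y, lam=1e-3, theta=2.0, iters=500, a=0.05, seed=0):
    """Batch gradient descent with a variable learning rate 1/(theta * L_k),
    where L_k is a secant-style estimate of the local Lipschitz constant
    and theta plays the role of a tuning hyper-parameter."""
    rng = np.random.default_rng(seed)
    w = 0.1 * rng.standard_normal(X.shape[1])
    g = gradient(w, X, y, lam, a)
    L = 1.0                                   # initial Lipschitz guess
    for _ in range(iters):
        w_new = w - g / (theta * L)           # steepest-descent step, no line search
        g_new = gradient(w_new, X, y, lam, a)
        s, d = w_new - w, g_new - g
        if np.linalg.norm(s) > 1e-12:         # secant-style Lipschitz estimate
            L = max(np.linalg.norm(d) / np.linalg.norm(s), 1e-8)
        w, g = w_new, g_new
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))
    w_true = np.zeros(10); w_true[:3] = [2.0, -1.5, 0.8]   # sparse ground truth
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    print("estimated weights:", np.round(train(X, y), 3))
```

With the smoothed penalty, small weights are driven toward zero without the non-differentiability of the raw L1/2 term at the origin, which is what allows a standard gradient-based convergence analysis of the kind the abstract refers to.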

First Page

100185

Last Page

100193

DOI

10.1109/ACCESS.2020.2997867

Publication Date

1-1-2020
