Date of Award:

May 2010

Document Type:

Dissertation

Degree Name:

Doctor of Philosophy (PhD)

Department:

Electrical and Computer Engineering

Advisor/Chair:

Jacob H. Gunther

Abstract

Gradient-based neural network training algorithms are prone to becoming trapped in poor local minima. Natural gradient algorithms promised to mitigate this shortcoming by finding better local minima, but they introduce additional training parameters and significant computational overhead. This dissertation presents a new formulation of the natural gradient that yields an algorithm requiring less memory and processing time than previous natural gradient algorithms while achieving comparable performance.
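
For context, the standard natural gradient update, the common baseline that such methods share (the dissertation's specific reformulation is not reproduced here), preconditions the ordinary gradient step with the inverse Fisher information matrix:

\[
\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta L(\theta_t),
\qquad
F(\theta) = \mathbb{E}\!\left[ \nabla_\theta \log p(x;\theta)\, \nabla_\theta \log p(x;\theta)^{\top} \right],
\]

where \(\eta\) is the learning rate and \(L\) is the training loss. For a network with \(n\) parameters, explicitly forming \(F\) costs \(O(n^2)\) memory and inverting it costs \(O(n^3)\) time, which is the kind of overhead the abstract refers to.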

Checksum

a1e5beb98a6695b26dc907be41bbac94
