Date of Award:

5-2010

Document Type:

Dissertation

Degree Name:

Doctor of Philosophy (PhD)

Department:

Electrical and Computer Engineering

Advisor/Chair:

Dr. Jacob H. Gunther

Abstract

Neural network training algorithms have always suffered from the problem of local minima. The advent of natural gradient algorithms promised to overcome this shortcoming by finding better local minima. However, these algorithms require additional training parameters and incur significant computational overhead. Using a new formulation of the natural gradient, this dissertation describes an algorithm that requires less memory and processing time than previous natural gradient algorithms while achieving comparable performance.
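For context, a minimal sketch of a generic natural gradient update is given below; it uses an empirical Fisher information approximation and illustrates the memory and computational overhead the abstract refers to (a d-by-d Fisher matrix and a linear solve per step). The function name, learning rate, and damping value are illustrative assumptions, and the dissertation's specific low-overhead formulation is not reproduced here.

import numpy as np

def natural_gradient_step(params, per_sample_grads, lr=0.01, damping=1e-4):
    """Update params along the natural gradient direction (illustrative sketch).

    params           : (d,) current parameter vector
    per_sample_grads : (n, d) loss gradient for each training sample
    lr               : learning rate (assumed value)
    damping          : Tikhonov damping to keep the Fisher matrix invertible
    """
    # Ordinary (Euclidean) gradient: mean over the per-sample gradients.
    grad = per_sample_grads.mean(axis=0)

    # Empirical Fisher information matrix: average outer product of
    # per-sample gradients, plus damping for numerical stability.
    # Note the d x d storage cost, which motivates cheaper formulations.
    n = per_sample_grads.shape[0]
    fisher = per_sample_grads.T @ per_sample_grads / n
    fisher += damping * np.eye(params.size)

    # Natural gradient direction: solve F x = grad rather than forming F^{-1}.
    nat_grad = np.linalg.solve(fisher, grad)

    return params - lr * nat_grad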
