Date of Award:

5-2010

Document Type:

Dissertation

Degree Name:

Doctor of Philosophy (PhD)

Department:

Electrical and Computer Engineering

Committee Chair(s)

Jacob H. Gunther

Committee

Jacob H. Gunther
Todd K. Moon
YangQuan Chen
Wei Ren
Donald Cooley

Abstract

Neural network training algorithms have long suffered from the problem of local minima. Natural gradient algorithms promised to overcome this shortcoming by finding better local minima, but they require additional training parameters and computational overhead. This dissertation describes an algorithm, based on a new formulation of the natural gradient, that uses less memory and processing time than previous natural gradient algorithms while achieving comparable performance.
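To make the idea concrete, the sketch below shows a generic natural-gradient update for logistic regression: the ordinary gradient is preconditioned by the inverse of an empirical Fisher matrix. This is a minimal illustration of the general technique the abstract refers to, not the dissertation's formulation; the empirical Fisher approximation, damping term, and all hyperparameter values are assumptions for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: 200 examples, 3 features (illustrative only).
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
lr, damping = 0.5, 1e-3  # step size and Tikhonov damping (assumed values)

for step in range(100):
    p = sigmoid(X @ w)
    # Per-example gradients of the negative log-likelihood: (p_i - y_i) * x_i.
    G = (p - y)[:, None] * X            # shape (n, d)
    grad = G.mean(axis=0)               # ordinary gradient
    fisher = G.T @ G / len(y)           # empirical Fisher: mean of g_i g_i^T
    # Natural-gradient step: solve (F + damping*I) x = grad instead of
    # following grad directly, so the update respects the model's geometry.
    nat_grad = np.linalg.solve(fisher + damping * np.eye(3), grad)
    w -= lr * nat_grad

print("estimated weights:", w)

The extra cost relative to plain gradient descent is forming and inverting the Fisher matrix; reducing that memory and processing overhead is the shortcoming the dissertation's new formulation targets.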

Checksum

b6bceb03b4dde605214baecaa99811a9
