EFFORT: A Comprehensive Technique to Tackle Timing Violations and Improve Energy Efficiency of Near-Threshold Tensor Processing Units

Document Type

Article

Journal/Book Title/Conference

IEEE Transactions on Very Large Scale Integration (VLSI) Systems

Volume

29

Issue

10

Publisher

Institute of Electrical and Electronics Engineers

Publication Date

October 1, 2021

Funder

National Science Foundation

First Page

1790

Last Page

1799

Abstract

Modern deep neural network (DNN) applications demand a processing throughput that traditional von Neumann architectures usually cannot meet. Consequently, hardware accelerators comprising a sea of multiply-and-accumulate (MAC) units have recently gained prominence in accelerating DNN inference engines. For example, tensor processing units (TPUs) account for the lion's share of Google's datacenter inference operations. The proliferation of real-time DNN predictions comes with a tremendous energy budget. In the quest to trim the energy footprint of DNN accelerators, we propose the Energy eFFicient and errOr Resilient TPU (EFFORT), an energy-optimized yet high-performance TPU architecture operating in the near-threshold computing (NTC) region. EFFORT promotes a better-than-worst-case design by operating the NTC TPU at a substantially higher frequency while keeping the voltage at the nominal NTC value. To tackle the timing errors caused by such aggressive operation, we employ an opportunistic error mitigation strategy. In addition, we implement an in situ clock gating architecture that drastically reduces the MACs' dynamic power consumption. Compared to a cutting-edge error mitigation technique for TPUs, EFFORT enables up to 2.5× better performance at NTC with only a 4% average accuracy drop across six out of eight DNN benchmarks.
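
The three mechanisms named in the abstract (aggressive clocking at the NTC nominal voltage, opportunistic timing-error mitigation, and in situ clock gating of the MACs) can be pictured with a minimal behavioral sketch. The Python sketch below is not the paper's design: the timing-error flag, the zero-operand gating condition, and the keep-the-previous-partial-sum mitigation policy are assumptions chosen purely for illustration, and the MACUnit and run_dot_product names are hypothetical.

```python
# Behavioral sketch only (not the authors' RTL). It models one MAC that:
#  (i) skips its update when an operand is zero, standing in for in situ clock gating
#      (assumption: zero-operand skipping is used as the gating condition here);
#  (ii) on a flagged timing error, opportunistically keeps its last known-good partial
#      sum instead of stalling (assumption: "keep last value" mitigation policy).
import random


class MACUnit:
    def __init__(self):
        self.partial_sum = 0
        self.active_cycles = 0  # proxy for dynamic power: cycles the MAC actually toggled

    def step(self, activation, weight, timing_error=False):
        # Clock gating (modeled): a zero operand cannot change the partial sum,
        # so the update is skipped and no "active" cycle is charged.
        if activation == 0 or weight == 0:
            return self.partial_sum
        # Opportunistic mitigation (modeled): drop the potentially corrupted update
        # and retain the previous partial sum rather than stalling the pipeline.
        if timing_error:
            return self.partial_sum
        self.active_cycles += 1
        self.partial_sum += activation * weight
        return self.partial_sum


def run_dot_product(activations, weights, error_rate=0.05, seed=0):
    """Accumulate one dot product on a single MAC, injecting random timing-error flags."""
    rng = random.Random(seed)
    mac = MACUnit()
    for a, w in zip(activations, weights):
        mac.step(a, w, timing_error=rng.random() < error_rate)
    return mac.partial_sum, mac.active_cycles


if __name__ == "__main__":
    acts = [3, 0, 1, 4, 0, 2, 5, 0]  # sparse activations benefit from zero-skip gating
    wts = [2, 7, 0, 1, 3, 2, 1, 4]
    result, toggles = run_dot_product(acts, wts)
    exact = sum(a * w for a, w in zip(acts, wts))
    print(f"approximate sum = {result}, exact sum = {exact}, active cycles = {toggles}")
```

Under this toy model, the gated cycles on zero operands stand in for the dynamic-power savings of clock gating, while the mitigation path trades a bounded accumulation error for uninterrupted throughput, mirroring the accuracy-versus-performance trade-off the abstract reports.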
