It can handle massive amounts of data for generative AI and high-performance computing workloads, such as the neural network training Tesla runs for its Full Self-Driving (FSD) system.
That brings the GPU’s memory bandwidth to 4.8 terabytes per second, up from 3.35 terabytes per second on the H100, and its total memory capacity to 141GB, up from the 80GB of its predecessor.
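For context, the stated figures work out to roughly a 43% increase in memory bandwidth and a 76% increase in capacity over the H100. A quick check, using only the numbers reported above:

```python
# H200 vs. H100 specs as reported (bandwidth in TB/s, memory in GB).
h100 = {"bandwidth_tbps": 3.35, "memory_gb": 80}
h200 = {"bandwidth_tbps": 4.8, "memory_gb": 141}

# Relative improvement of the H200 over the H100.
bandwidth_gain = h200["bandwidth_tbps"] / h100["bandwidth_tbps"] - 1
memory_gain = h200["memory_gb"] / h100["memory_gb"] - 1

print(f"Memory bandwidth: +{bandwidth_gain:.0%}")  # +43%
print(f"Memory capacity:  +{memory_gain:.0%}")     # +76%
```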
The first H200 chips will be released in the second quarter of 2024, and are expected to cost tens of thousands of dollars each.