Advancing the rate of AI innovation
HBM3E built for AI and supercomputing with industry-leading process technology
Frequently asked questions
Micron’s HBM3E 8-high 24GB and HBM3E 12-high 36GB deliver industry-leading performance, with bandwidth greater than 1.2 TB/s, while consuming 30% less power than any competing product on the market.
Micron HBM3E 8-high 24GB will ship in NVIDIA H200 Tensor Core GPUs starting in the second calendar quarter of 2024. Micron HBM3E 12-high 36GB samples are available now.
Micron’s HBM3E 8-high and 12-high modules deliver an industry-leading pin speed of greater than 9.2Gbps and are backward compatible with the data rates of first-generation HBM2 devices.
Micron’s HBM3E 8-high and 12-high solutions deliver an industry-leading bandwidth of more than 1.2 TB/s per placement. The HBM3E interface has 1024 IO pins, and at a pin speed greater than 9.2Gbps the aggregate data rate exceeds 1.2TB/s.
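As a rough sanity check, the per-placement bandwidth follows directly from the pin count and pin speed quoted above. The sketch below assumes a 9.6Gbps pin speed, an illustrative value consistent with "greater than 9.2Gbps", not a quoted spec.

```python
# Rough per-placement bandwidth estimate for an HBM3E stack.
# The 1024-pin interface width comes from the answer above; the 9.6 Gb/s
# pin speed is an illustrative assumption consistent with "greater than 9.2 Gb/s".

IO_PINS = 1024          # HBM3E interface width per placement
PIN_SPEED_GBPS = 9.6    # assumed pin speed (illustrative)

bandwidth_gb_per_s = IO_PINS * PIN_SPEED_GBPS / 8   # bits/s -> bytes/s
bandwidth_tb_per_s = bandwidth_gb_per_s / 1000

print(f"{bandwidth_tb_per_s:.2f} TB/s per placement")  # ~1.23 TB/s, i.e. > 1.2 TB/s
```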
Micron’s industry-leading HBM3E 8-high provides 24GB capacity per placement. The recently announced Micron HBM3E 12-high cube will deliver a jaw-dropping 36GB of capacity per placement.
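The stack capacities follow from the per-die density. A minimal sketch, assuming each DRAM die in the stack is 3GB (an inference from the 24GB 8-high figure above, not a quoted spec):

```python
# Per-placement capacity from stack height, assuming 3GB per DRAM die.
# The 3GB-per-die figure is inferred from 24GB / 8-high; it is not a quoted spec.

GB_PER_DIE = 24 // 8  # 3GB per die

for stack_height in (8, 12):
    print(f"{stack_height}-high: {stack_height * GB_PER_DIE}GB per placement")
# 8-high: 24GB, 12-high: 36GB
```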
HBM2 offers 8 independent channels running at 3.6Gbps per pin, delivers up to 410GB/s of bandwidth, and comes in 4GB, 8GB and 16GB capacities. HBM3E offers 16 independent channels and 32 pseudo channels. Micron’s HBM3E delivers a pin speed greater than 9.2Gbps and an industry-leading bandwidth of more than 1.2 TB/s per placement, offers 24GB of capacity in an 8-high stack and 36GB in a 12-high stack, and consumes 30% less power than competing products.
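For readers who want the generation comparison in one place, the snippet below collects the figures quoted in this answer into a simple lookup; the field names are illustrative, and only values stated on this page are used.

```python
# HBM2 vs. Micron HBM3E, using only the figures quoted in the answer above.
# Field names are illustrative; values are as stated on this page.
HBM_SPECS = {
    "HBM2": {
        "independent_channels": 8,
        "pin_speed_gbps": 3.6,
        "bandwidth_gb_s": 410,          # "up to 410GB/s"
        "capacities_gb": [4, 8, 16],
    },
    "HBM3E (Micron)": {
        "independent_channels": 16,
        "pseudo_channels": 32,
        "pin_speed_gbps": ">9.2",
        "bandwidth_gb_s": ">1200",      # more than 1.2 TB/s per placement
        "capacities_gb": [24, 36],      # 8-high and 12-high stacks
    },
}

for generation, specs in HBM_SPECS.items():
    print(generation)
    for field, value in specs.items():
        print(f"  {field}: {value}")
```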
Please see our Product Brief.
1. Data rate testing estimates based on shmoo plot of pin speed performed in manufacturing test environment.
2. 50% more capacity for same stack height.
3. Power and performance estimates based on simulation results of workload use cases.
4. Based on internal Micron model referencing an ACM Publication, as compared to the current shipping platform (H100).
5. Based on internal Micron model referencing Bernstein’s research report, NVIDIA (NVDA): A bottoms-up approach to sizing the ChatGPT opportunity, February 27, 2023, as compared to the current shipping platform (H100).
6. Based on system measurements using commercially available H100 platform and linear extrapolation.