High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially from Samsung, AMD and SK Hynix. It is used as RAM in upcoming CPUs and FPGAs, and in some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor. This is achieved by stacking up to eight DRAM dies and an optional base die, which can include buffer circuitry and test logic. The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer. Alternatively, the memory die can be stacked directly on the CPU or GPU chip. Within the stack, the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. HBM is similar in principle to, but incompatible with, the Hybrid Memory Cube (HMC) interface developed by Micron Technology. The HBM memory bus is very wide in comparison to other DRAM memories such as DDR4 or GDDR5.
An HBM stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of eight channels and a width of 1024 bits. A graphics card/GPU with four 4-Hi HBM stacks would therefore have a memory bus 4096 bits wide. By comparison, the bus width of GDDR memories is 32 bits per channel, with sixteen channels for a graphics card with a 512-bit memory interface. HBM supports up to 4 GB per package. The larger number of connections to the memory, relative to DDR4 or GDDR5, required a new method of connecting the HBM memory to the GPU (or other processor). AMD and Nvidia have both used purpose-built silicon chips, called interposers, to connect the memory and GPU. The interposer has the added advantage of requiring the memory and processor to be physically close, shortening memory paths. However, because semiconductor device fabrication is significantly more expensive than printed circuit board manufacture, this adds cost to the final product.
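The bus-width figures above follow directly from the per-die channel layout. A minimal sketch of the arithmetic (constant and function names are illustrative, not from any HBM specification):

```python
# Bus-width arithmetic for HBM stacks, as described in the text.

CHANNEL_WIDTH_BITS = 128   # each HBM channel has a 128-bit data bus
CHANNELS_PER_DIE = 2       # two channels per DRAM die

def hbm_stack_width_bits(dies: int) -> int:
    """Total data-bus width in bits of one HBM stack with the given die count."""
    return dies * CHANNELS_PER_DIE * CHANNEL_WIDTH_BITS

# A 4-Hi stack: 4 dies x 2 channels x 128 bits = 1024 bits
assert hbm_stack_width_bits(4) == 1024

# A GPU with four 4-Hi stacks: 4 x 1024 = a 4096-bit memory bus
assert 4 * hbm_stack_width_bits(4) == 4096

# For comparison, sixteen 32-bit GDDR channels give a 512-bit interface
assert 16 * 32 == 512
```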
The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels; the channels are completely independent of one another and are not necessarily synchronous to one another. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. Each channel interface maintains a 128-bit data bus operating at double data rate (DDR). HBM supports transfer rates of 1 GT/s per pin (transferring 1 bit), yielding an overall package bandwidth of 128 GB/s. The second generation of High Bandwidth Memory, HBM2, also specifies up to eight dies per stack and doubles pin transfer rates to 2 GT/s. Retaining the 1024-bit-wide access, HBM2 is able to reach 256 GB/s of memory bandwidth per package. The HBM2 specification allows up to 8 GB per package. HBM2 is expected to be particularly useful for performance-sensitive consumer applications such as virtual reality. On January 19, 2016, Samsung announced early mass production of HBM2, at up to 8 GB per stack.
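The per-package bandwidth figures come from multiplying the 1024-bit bus by the per-pin transfer rate and converting bits to bytes. A short sketch of that calculation (the function name is illustrative):

```python
# Peak per-package bandwidth = bus width (bits) x per-pin rate (GT/s) / 8 bits-per-byte.

def hbm_bandwidth_gbs(bus_width_bits: int, rate_gts: float) -> float:
    """Peak package bandwidth in GB/s for a given bus width and per-pin transfer rate."""
    return bus_width_bits * rate_gts / 8

print(hbm_bandwidth_gbs(1024, 1.0))  # HBM1 at 1 GT/s -> 128.0 GB/s
print(hbm_bandwidth_gbs(1024, 2.0))  # HBM2 at 2 GT/s -> 256.0 GB/s
```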
In late 2018, JEDEC announced an update to the HBM2 specification, providing for increased bandwidth and capacities. Up to 307 GB/s per stack (2.5 Tbit/s effective data rate) is now supported in the official specification, though products operating at this speed had already been available. Additionally, the update added support for 12-Hi stacks (12 dies), making capacities of up to 24 GB per stack possible. On March 20, 2019, Samsung announced their Flashbolt HBM2E, featuring eight dies per stack and a transfer rate of 3.2 GT/s, providing a total of 16 GB and 410 GB/s per stack. On August 12, 2019, SK Hynix announced their HBM2E, featuring eight dies per stack and a transfer rate of 3.6 GT/s, providing a total of 16 GB and 460 GB/s per stack. On July 2, 2020, SK Hynix announced that mass production had begun. In October 2019, Samsung announced their 12-layered HBM2E. In late 2020, Micron disclosed that the HBM2E standard would be updated, and alongside that unveiled the next standard, known as HBMnext (later renamed HBM3).
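The HBM2E headline numbers above all follow from the same 1024-bit bus as the earlier generations; the quoted per-stack bandwidths are the per-pin rate times 128 bytes per transfer. A sketch checking the figures in the text (the 2.4 GT/s rate for the updated HBM2 specification is inferred here from the 307 GB/s figure, not stated in the text):

```python
# Checking the quoted HBM2E bandwidths against the 1024-bit bus.
# Per-package bandwidth (GB/s) = 1024 bits x per-pin rate (GT/s) / 8.

def bandwidth_gbs(rate_gts: float, bus_bits: int = 1024) -> float:
    return bus_bits * rate_gts / 8

# Updated HBM2 spec: an assumed 2.4 GT/s pin rate -> 307.2 GB/s (quoted as 307 GB/s)
assert round(bandwidth_gbs(2.4), 1) == 307.2
# Samsung Flashbolt: 3.2 GT/s -> 409.6 GB/s (quoted as 410 GB/s)
assert round(bandwidth_gbs(3.2), 1) == 409.6
# SK Hynix HBM2E: 3.6 GT/s -> 460.8 GB/s (quoted as 460 GB/s)
assert round(bandwidth_gbs(3.6), 1) == 460.8
```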

