AMD’s recently discussed patent for “High-bandwidth DIMM” (HB-DIMM) memory has sparked interest online, but this isn’t a brand-new technology. The patent that’s making headlines is actually just an update to work AMD began back in 2022.
The patent describes a way to double the effective speed of DDR5 memory from 6.4 Gbps to 12.8 Gbps without needing faster memory chips. Instead, buffer chips on the module combine two slower "pseudo-channels," interleaving their data so the host sees twice the rate.
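As a rough illustration of that idea (a simplified sketch, not AMD's actual circuit design), the arithmetic works like a buffer merging two half-rate streams beat by beat:

```python
# Illustrative sketch: a buffer multiplexes two "pseudo-channels" so the
# host-side rate doubles without faster DRAM chips. Names and rates here
# are taken from the article; the interleave logic is a toy model.

DRAM_RATE_GBPS = 6.4      # per-pin rate of each pseudo-channel (DDR5)
PSEUDO_CHANNELS = 2       # the buffer multiplexes two of them

def interleave(pseudo_a, pseudo_b):
    """Merge two equal-length data streams beat by beat, as a
    multiplexing buffer would, doubling beats per unit time host-side."""
    merged = []
    for a, b in zip(pseudo_a, pseudo_b):
        merged.extend([a, b])
    return merged

host_rate = DRAM_RATE_GBPS * PSEUDO_CHANNELS
print(host_rate)          # 12.8 — the doubled host-side rate in Gbps
```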
What many reports miss is that this technology has already evolved beyond AMD’s patent. The memory industry, through its standards organization JEDEC, has combined AMD’s ideas with similar work from Intel and SK hynix to create a standard called MRDIMM (Multiplexed-Rank Dual Inline Memory Modules).
“This stuff has been talked about for a few years already,” notes Tom’s Hardware, explaining that the recent patent filing is likely just “bureaucratic housekeeping” to protect AMD’s intellectual property.
The standardized version, MRDIMM, is already shipping and being used with Intel’s newest Xeon 6 server processors. Testing by Phoronix shows these modules deliver modest overall performance gains but significant improvements for certain memory-intensive computing tasks.
These advanced memory modules don’t come cheap, though. Current MRDIMMs cost 28% to 114% more per gigabyte than standard DDR5 memory. For a server with 8-16 memory channels, this price difference adds up quickly.
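To see how quickly that premium compounds, here is a back-of-envelope calculation using the 28%-114% range quoted above. The baseline dollars-per-gigabyte figure is a hypothetical placeholder, not a quoted market price:

```python
# Back-of-envelope MRDIMM cost premium for a fully populated server.
# base_per_gb is a hypothetical assumption; premiums and channel counts
# come from the article.

base_per_gb = 4.00            # hypothetical standard DDR5 price, $/GB
premiums = (0.28, 1.14)       # MRDIMMs cost 28% to 114% more per GB

module_gb = 64                # one 64 GB module per channel (assumed)
channels = 16                 # high end of an 8-16 channel server

for p in premiums:
    extra = base_per_gb * p * module_gb * channels
    print(f"+{p:.0%} premium: ${extra:,.2f} extra per server")
```

Even at the low end of the range, the premium runs to four figures per server under these assumptions.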
AMD is expected to support this standardized memory technology in its upcoming server processors. At AMD’s Advancing AI event in June, CEO Lisa Su hinted at memory bandwidth reaching 1.6 terabytes per second for future EPYC “Venice” processors. This aligns perfectly with what second-generation MRDIMMs running at 12,800 Mbps could deliver across 16 memory channels.
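The alignment is easy to check. Assuming the usual 64-bit (8-byte) data bus per DDR5 channel, 12,800 MT/s across 16 channels lands almost exactly on Su's figure:

```python
# Sanity check of the 1.6 TB/s figure: second-gen MRDIMMs at 12,800 MT/s
# on a 16-channel platform, assuming 8 bytes of data per transfer per
# channel (the standard DDR5 data-bus width).

mt_per_s = 12_800e6           # transfers per second, per channel
bytes_per_transfer = 8        # 64-bit data bus per channel
channels = 16

bandwidth = mt_per_s * bytes_per_transfer * channels   # bytes/second
print(bandwidth / 1e12)       # 1.6384 — about 1.6 TB/s
```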
While server adoption is happening now, everyday computers likely won't see this technology soon. New memory standards need support from CPU makers, motherboard manufacturers, and chipset designers. The PC industry typically follows JEDEC standards, making it unlikely that AMD would pursue a proprietary memory format just for consumer computers.
The patent highlights growing memory bandwidth needs, especially for AI and graphics processing. “Modern computing platforms have ever greater memory bandwidth requirements,” the filing states, noting that current improvements aren’t keeping pace with what’s needed for high-performance applications.
Despite the patent’s technical merits, AMD’s stock actually dropped slightly when the news circulated. Market analysts note this reflects broader concerns about server product demand rather than the memory technology itself.
For consumers, the most practical impact may come from AMD’s APUs (chips that combine CPU and graphics), where reports suggest an HB‑DIMM PHY path could be added alongside DDR5 to prioritize bandwidth for AI and graphics tasks. This could improve performance for on-device AI processing where bursts of high bandwidth matter most.
While doubling memory bandwidth sounds impressive, remember that real-world benefits depend on specific tasks and overall system design. Most everyday computing isn’t limited by memory bandwidth, which is why this technology is targeting specialized server and AI applications first.