SK hynix HBM capacity hits zero as Big Tech offers fab funding
SK hynix told customers its high-bandwidth memory output is sold out and that Big Tech buyers are offering to fund new fabs and ASML lithography purchases.

SK hynix has “essentially zero” high-bandwidth memory output available, a person familiar with the situation told Reuters, with several Big Tech buyers offering to fund new fabrication lines and ASML lithography purchases to lock in supply for AI accelerators through the end of the decade.
Demand from clients seeking three-year HBM contracts “far exceeds our production capacity,” Ki Tae Kim, the South Korean memory maker’s head of HBM sales and marketing, said in remarks reported by Reuters and corroborated by Seoul Economic Daily. SK hynix holds 57 per cent of the high-bandwidth memory market and ranks second in DRAM with a 32 per cent share, according to Counterpoint figures cited in the reporting.
The company posted a record first quarter in late April, with operating margin reaching 72 per cent on the back of HBM pricing that doubled quarter on quarter. Management signalled stack prices could climb a further 63 per cent this quarter as buyers compete for the limited capacity available before new fabs come online. Shares of SK hynix closed at 1,686,000 won on Friday, up 1.93 per cent, just below the 52-week high of 1,689,000 won.
Customer-funded fabs
Some hyperscale buyers are now offering to underwrite dedicated production lines and to help finance purchases of ASML’s extreme ultraviolet lithography systems, the high-end machines required for advanced HBM stacks, according to Reuters and Seoul Economic Daily. Proposals reportedly include direct funding tied to the Yongin semiconductor cluster’s first fab, known as Y1, alongside backing for additional ASML EUV gear acquisitions. Each EUV system costs on the order of hundreds of millions of dollars, and ASML’s order book already stretches years out.
The arrangements would represent a structural shift in how memory capacity is built. Memory makers historically funded their own fabs against forecast demand, a model that produced painful busts whenever the cycle turned. Customer-funded capacity transfers part of the demand-side risk back to the buyers. The pattern echoes other moves in the AI supply chain, where Nvidia’s equity stakes in chip and infrastructure partners have crossed 40 billion dollars and drawn scrutiny over circular vendor relationships, and where Broadcom’s custom-chip programme with OpenAI has stalled at 18 billion dollars in financing.
Years to break the bottleneck
SK Group chairman Chey Tae-won warned in March that the wafer crunch could “drag on until 2030” and that ramping wafer output to meet AI demand would take “four to five years.” The comments preceded the first-quarter print and were reinforced by Kim’s HBM remarks at the company’s earnings call. HSBC analyst Kazunori Ito said in a note quoted by Reuters that “waiting for market conditions to improve is not a viable option” for buyers, and that fab expansion timelines of at least one year mean near-term capacity additions cannot match the AI demand curve.
HBM stacks DRAM dies vertically and connects them with through-silicon vias, providing the data throughput that AI accelerators require. The product line sits at the centre of the AI data centre supply chain, with shipments tied directly to Nvidia GPU output and to procurement at hyperscale operators including Microsoft, Google and Meta. SK hynix’s other DRAM and enterprise SSD lines are also reporting tight supply, with server module prices rising on the same squeeze.
Peers chase qualification
Samsung Electronics and Micron Technology, the two other major HBM suppliers, are pushing to qualify successor generations of stacks for Nvidia’s accelerator roadmap. Micron’s shares surged 38 per cent in a week earlier this month on the same AI memory tailwind, and TSMC reported its April sales jumped 17.5 per cent on AI chip demand. Samsung is widely expected to win a greater share of next-generation HBM4 allocations as Nvidia diversifies its memory sourcing, although the Korean rival’s qualification programme is running behind SK hynix’s.
What the company has not said
SK hynix has not publicly identified the Big Tech buyers offering to fund fabs, and the company’s stated capital expenditure framework remains internally driven. The Yongin Y1 fab is on a pilot-production timeline within the next year, with mass output not scheduled before 2027. Until then, allocation decisions on HBM3E and the forthcoming HBM4 generation will determine the pecking order among Nvidia, AMD and the hyperscale custom-silicon programmes drawing on the same memory pool.
For SK hynix, a sold-out order book is the kind of problem most chip executives rarely face. For its customers, the calculation is plain. Capacity that does not exist cannot be bought.
Sources
- SK hynix AI Memory Crunch: Big Tech Offers to Fund Fabs as Capacity Runs Out, TS2, May 9, 2026
- Reuters reporting via TS2: customer-funded fab proposals, Ki Tae Kim quote, capacity comment
- Seoul Economic Daily reporting via TS2: Yongin Y1 funding proposals and ASML EUV financing
- Counterpoint Research: HBM and DRAM market share figures
Kai Mendel
Technology editor covering fintech, AI and the platform economy. Reports from San Francisco.