This is actually a very important debate for the entire AI semiconductor supply chain, not just memory stocks like Micron Technology, SanDisk, Western Digital, and Seagate Technology.


The key question is simple but very powerful:


> Does AI efficiency reduce hardware demand, or does it increase total usage?




Historically in tech, the answer has usually been the latter.



---


What TurboQuant actually affects


From what analysts are saying, TurboQuant mainly:

- Optimises the KV cache
- Improves inference efficiency
- Reduces memory per query
- Does NOT reduce training memory
- Does NOT significantly reduce HBM demand
- Mostly affects inference VRAM / system memory
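To see why "reduces memory per query" matters, here is a rough back-of-envelope KV-cache sizing for a hypothetical 7B-class model. The layer count, head count, head dimension, and context length below are illustrative assumptions, not TurboQuant's published figures; the point is just that cutting KV-cache precision from 16-bit to 4-bit cuts inference memory per query by roughly 4x.

```python
# Back-of-envelope KV-cache sizing for a hypothetical 7B-class model.
# All parameters (layers, heads, head_dim, seq_len) are assumptions
# for illustration only.

def kv_cache_bytes(num_layers, num_heads, head_dim, seq_len, bits_per_value):
    # 2x for the K and V tensors, one pair cached per layer.
    values = 2 * num_layers * num_heads * head_dim * seq_len
    return values * bits_per_value // 8

GiB = 1024 ** 3

fp16 = kv_cache_bytes(num_layers=32, num_heads=32, head_dim=128,
                      seq_len=4096, bits_per_value=16)
int4 = kv_cache_bytes(num_layers=32, num_heads=32, head_dim=128,
                      seq_len=4096, bits_per_value=4)

print(f"FP16 KV cache: {fp16 / GiB:.2f} GiB")  # 2.00 GiB
print(f"INT4 KV cache: {int4 / GiB:.2f} GiB")  # 0.50 GiB
```

Note this is inference-serving memory (VRAM / system DRAM), not the HBM consumed by training runs, which is why the HBM story is largely untouched.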



So Morgan Stanley’s view makes sense: HBM (used in training GPUs) should not be heavily affected.


This means companies most exposed to HBM and AI training, especially Micron Technology, may actually be less impacted than the market fears.



---


The Jevons Paradox (very important here)


There is a famous economic concept called Jevons Paradox:


> When technology becomes more efficient, total consumption often increases, not decreases.




Examples:

- More fuel-efficient cars → people drive more
- Cheaper cloud computing → more software
- Faster GPUs → more AI models
- Cheaper storage → more data stored



So if inference becomes cheaper:

- More AI agents
- More queries
- More applications
- More edge AI devices
- More data generation
- More storage demand
- More memory demand overall



So efficiency may increase total memory demand, not reduce it.
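The Jevons logic nets out with simple arithmetic. The 4x efficiency gain and 6x usage growth below are hypothetical numbers chosen for illustration, not forecasts:

```python
# Hypothetical Jevons-style arithmetic: efficiency cuts memory per query,
# but if usage grows faster, total memory demand still rises.
# The 4x gain and 6x usage growth are illustrative assumptions.

mem_per_query_before = 1.0    # arbitrary memory units per query
efficiency_gain = 4.0         # e.g. 4-bit vs 16-bit KV cache
queries_before = 100.0        # baseline query volume

mem_per_query_after = mem_per_query_before / efficiency_gain
queries_after = queries_before * 6.0   # cheaper inference -> more usage

total_before = mem_per_query_before * queries_before   # 100.0
total_after = mem_per_query_after * queries_after      # 150.0

print(total_after / total_before)  # 1.5 -> net demand up 50%
```

Whether demand actually rises depends on whether usage growth outpaces the efficiency gain; historically in computing it usually has.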



---


Memory demand actually comes from three different areas

It is very important to separate them:


| Segment | Memory Type | Companies |
| --- | --- | --- |
| AI Training | HBM | Micron |
| AI Inference | DRAM | Micron |
| Data Storage | NAND / HDD | SanDisk, WDC, Seagate |
| Data Centres | SSD / HDD | WDC, Seagate |
| Edge Devices | NAND | SanDisk |



TurboQuant mainly affects inference memory efficiency, not training and not storage demand from data growth.


AI still generates massive data:

- Logs
- Video
- Synthetic data
- Training datasets
- Model checkpoints
- Agent memory
- Enterprise data lakes



All these need storage, not just VRAM.
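A quick sketch of how these byproducts add up. Every figure below is a made-up assumption for illustration; the point is only that logs and checkpoints accumulate on NAND/HDD regardless of how efficient inference memory becomes:

```python
# Rough, fully hypothetical sizing of AI storage byproducts per day.
# Every number below is an assumption for illustration only.

queries_per_day = 1_000_000_000       # assumed global query volume
log_bytes_per_query = 2_000           # assumed logs + telemetry per query
checkpoint_bytes = 14_000_000_000     # one ~14 GB checkpoint (7B params, fp16)
checkpoints_per_day = 50              # assumed across many training runs

TB = 10**12
logs = queries_per_day * log_bytes_per_query
ckpts = checkpoints_per_day * checkpoint_bytes

print(f"Logs: {logs / TB:.0f} TB/day, checkpoints: {ckpts / TB:.1f} TB/day")
```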


So companies like:

- Western Digital
- Seagate Technology
- SanDisk

are tied more to data growth than to inference efficiency.



---


My view: Overreaction vs First Crack


I would frame it like this:


Short term


This could be:

- Expectations set too high
- Very aggressive capex assumptions
- Memory stocks having run too far
- Any negative narrative triggering a selloff

So short term, this looks more like a positioning unwind / sentiment shift.


Long term


The real risks to memory demand are actually:


1. Custom AI chips with on-chip memory
2. Better model compression
3. More efficient architectures
4. Slower AI spending if the economy weakens

Not just TurboQuant alone.



---


Big picture conclusion


I would summarise the situation like this:


If AI demand slows → memory stocks fall.

If AI becomes more efficient → AI spreads everywhere → memory demand increases.


So paradoxically:


> The biggest risk to memory is not efficiency.
> The biggest risk is AI capex slowdown.




Personally, I would currently lean toward this being more of a scare than the end of the AI memory cycle.


The real things to watch are:

- HBM pricing
- Nvidia shipments
- Hyperscaler capex
- NAND prices
- Data centre buildouts

If those remain strong, then this drop may just be noise, not a structural crack.

# Micron, SNDK Selloff on TurboQuant: Overreaction or Time to Cool Down?

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.
