The Real AI Bottleneck No One Talks About

AI Doesn’t Need More GPUs — It Needs Faster Wires

$NVIDIA(NVDA)$  

$Astera Labs, Inc.(ALAB)$  

$Marvell Technology(MRVL)$  

Over the past three years, whenever artificial intelligence comes up, most people immediately picture the same thing: NVIDIA's H100 GPUs forming massive black walls of compute power, followed by the Blackwell architecture's B200 chips, and then the Rubin platform. Investment bank reports compete to outdo each other with ever-bigger claims about compute capacity, parameter counts, and teraflops.

But here's the key question few people really consider: If you bring these expensive GPUs home, can they actually run at 100% utilization? Or do they spend most of their time idling, waiting for data to arrive?

In the game of AI infrastructure, the most critical upgrade has already quietly happened.

Looking back at 2023 and 2024, the dominant theme was brute-force stockpiling of hardware. Large models were just exploding onto the scene, and everyone was in a wild, grassroots phase. The logic was simple: Whoever had the most GPUs and the highest FLOPs won. It was a pure arms race—crudely put, bigger and stronger cards dominated.

But now, at the end of 2025, the direction has completely shifted.

As large language models have surged to trillions of parameters, and mixture-of-experts models have become widespread, we've hit a harsh reality: No matter how powerful a single GPU is, it's essentially an isolated island.

Today's AI is no longer about lone superheroes fighting solo. Instead, thousands of GPUs must collaborate like a single logical brain.

In this architecture, if you focus only on buying the strongest GPUs while ignoring the connections between them, you'll end up with an absurd scenario: Tens of thousands of dollars' worth of GPUs sitting there idling and spinning uselessly.

Why? Because data crawls too slowly through copper wires, and the GPUs spend most of their time waiting for it.

So, from 2025 to 2026, the main battlefield for AI investment has changed. Raw compute power is no longer the deciding factor. What truly creates differentiation is interconnect.

This is the era where interconnect reigns supreme.

In this new battlefield, two companies have essentially seized control of the entire AI industry's lifeline:

One is Marvell Technology $Marvell Technology(MRVL)$  

The other is Astera Labs $Astera Labs, Inc.(ALAB)$  

Why do many AI stocks tell grand stories but deliver far less profit than expected?

Why has interconnect—an invisible component—become the highest-margin part of AI infrastructure?

And most importantly, before investing: With Marvell and Astera, are you buying short-term explosive growth or long-term certainty?

To understand this war, we need to grasp why data transmission suddenly became the bottleneck.

First, recall the traditional approach in data centers: Electrical signals travel through copper wires—whether PCB traces on motherboards or copper cables in racks. This method has been used for decades: cheap, mature, and reliable.

But physics has an unavoidable law called the skin effect. As signal speeds increase and frequencies rise, current stops flowing evenly through the copper's core and crowds toward the outer "skin."

In AI data centers, this is very real. During large model training, GPUs constantly exchange intermediate results, gradients, and parameters across cards, nodes, and even racks. Larger models mean more frequent data swaps and ever-higher transmission speeds.

At these high frequencies, current squeezes to the surface, narrowing the effective path. It's like a six-lane highway suddenly forcing all traffic into the emergency lane—congestion builds, signal attenuation worsens, error rates climb, and GPUs pause to wait.

In AI, this isn't just slowdown; it's wasted compute and money burning away.

Remember: Higher frequency makes copper harder to use reliably.
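The skin effect isn't hand-waving; it's a one-line formula. A back-of-envelope sketch, assuming copper's standard resistivity and treating the PCIe Nyquist frequencies as rough illustrative values:

```python
import math

def skin_depth_m(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth: delta = sqrt(rho / (pi * f * mu0 * mu_r)).
    Defaults assume copper (rho ~ 1.68e-8 ohm*m, mu_r ~ 1)."""
    mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
    return math.sqrt(resistivity / (math.pi * freq_hz * mu0 * mu_r))

# Nyquist frequencies below are rough: e.g., PCIe 6.0 runs 32 Gbaud PAM4,
# putting its fundamental near 16 GHz.
for label, f in [("1 GHz", 1e9), ("8 GHz", 8e9), ("16 GHz", 16e9)]:
    print(f"{label}: skin depth ~ {skin_depth_m(f) * 1e6:.2f} um")
```

At 1 GHz current still uses roughly a 2 µm outer shell of the conductor; at 16 GHz that shell shrinks to about half a micron. Same wire, a fraction of the usable cross-section: that is the "six-lane highway squeezed into the emergency lane."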

The trouble starts here. Beyond the narrowing path, high speeds cause a second, deadlier issue: signal distortion.

Think of it like racing: At PCIe 4.0 speeds, minor road bumps are tolerable. By PCIe 5.0, speed doubles, and small imperfections cause wobbles—but experienced drivers recover. At PCIe 6.0 (64 GT/s), tiny flaws amplify into catastrophes: Reflections, crosstalk, and impedance mismatches turn clean signals into distorted mush.

The receiver sees ambiguous bits, forcing retries and error correction. Ironically, faster physical transmission can result in slower effective throughput due to constant corrections.
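To put numbers on that doubling: each PCIe generation doubles the transfer rate, and usable bandwidth also depends on line encoding. A simplified sketch for a x16 link (the PCIe 6.0 efficiency figure is an approximation, since FLIT-mode and FEC overhead vary by configuration):

```python
def x16_bandwidth_GBps(gts, encoding_efficiency):
    """Usable GB/s on a x16 link: rate (GT/s) * 16 lanes * encoding / 8 bits."""
    return gts * 16 * encoding_efficiency / 8

gens = {
    "PCIe 4.0": (16, 128 / 130),  # NRZ, 128b/130b encoding
    "PCIe 5.0": (32, 128 / 130),  # NRZ, 128b/130b encoding
    "PCIe 6.0": (64, 0.95),       # PAM4 + FLIT/FEC; ~0.95 is an assumption
}
for name, (rate, eff) in gens.items():
    print(f"{name}: ~{x16_bandwidth_GBps(rate, eff):.0f} GB/s per x16 link")
```

Gen 4 lands near 31.5 GB/s, Gen 5 near 63 GB/s, Gen 6 past 120 GB/s. But note the catch in the paragraph above: if distortion forces retries, the *effective* number can fall well below these headline figures.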

Engineers now fear not slow speeds, but speeds so fast that data "lies."

This is why, post-PCIe 6.0, copper faces existential challenges—not just distance, but fidelity.

Recent evolution shows the desperation: PCIe 4.0 signals traveled ~50 cm easily on motherboards. PCIe 5.0 halved that to ~20-25 cm, requiring pricier materials. PCIe 6.0 limits standard traces to 12-17 cm without aids—roughly an iPhone's length.

Data transmission has become the most expensive resource in AI hardware.

Faced with this physics dead end, the industry has two paths:

Option A: Extend copper's life with signal conditioning (retimers).

Option B: Switch fully to optical interconnects.

(Note: We're focusing on intra-rack distances here, not inter-rack where optics already dominate.)

Let's break down both approaches to see what the market is really betting on.

Option A: Signal Conditioning – Extending Copper's Life

This is Astera Labs' domain. The idea is intuitive: Since signals degrade over copper, place "service stations" (retimer chips) along the path to clean, reshape, amplify, and retime them.

Retimers act like pit stops, refreshing signals for the next leg.

Astera's business model is straightforward—and borderline protection-racket-like: If you want to keep using cheap copper, pay us to fight physics for you. To push copper through PCIe 6.0 or even 7.0, hand over the toll.

The harder copper becomes, the more essential retimers are—and the more Astera earns.
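The "pit stop" logic can be captured in a toy link-budget calculation. Assume the receiver tolerates roughly 32 dB of channel loss end to end (in the ballpark of PCIe 6.0 budgets) and the trace loses an assumed 2 dB/cm at the Nyquist frequency; both figures are illustrative, not from any spec table:

```python
def reach_cm(loss_budget_db, loss_db_per_cm, n_retimers=0):
    """Each retimer fully regenerates the signal, restarting the loss budget.
    Illustrative model only: ignores connectors, vias, and package loss."""
    return (n_retimers + 1) * loss_budget_db / loss_db_per_cm

BUDGET_DB = 32.0   # assumed end-to-end loss a PCIe 6.0 receiver can absorb
LOSS_PER_CM = 2.0  # assumed FR4 trace loss near the Nyquist frequency

print(f"No retimer:  ~{reach_cm(BUDGET_DB, LOSS_PER_CM):.0f} cm")
print(f"One retimer: ~{reach_cm(BUDGET_DB, LOSS_PER_CM, 1):.0f} cm")
```

With these assumptions a bare trace reaches ~16 cm, matching the "roughly an iPhone's length" figure, and every retimer added along the path buys another full budget. That is Astera's toll booth in one function.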

In Q3 2025, Astera reported ~76% gross margins. Achieving that in hardware usually means you're not selling chips—you're selling survival.

Astera dominates because:

It pioneered PCIe 6.0 retimers (Aries series) with production-ready chips while competitors were still prototyping.

Deep ties with NVIDIA: Reference designs for HGX H100/H200 and Blackwell platforms default to Astera chips.

Estimates suggest Astera holds >80% share in mainstream AI servers—making it a de facto standard.

Beyond basic retimers, Astera has two more growth engines:

Taurus (Active Electrical Cables – AEC): Embeds mini-retimers into cable connectors, turning passive copper cables into active, self-healing ones. This extends reach dramatically, with board-level retimers and Taurus cables working in relay. Traditional cable makers can't do this—it's chip-design territory. Astera packages everything (chip, board, firmware, cooling) into modules, enabling second-tier factories to compete with giants. Taurus is Astera's fastest-growing segment.

Scorpio (PCIe Switches): Acts as a smart hub when CPUs lack enough PCIe lanes for 8-10 GPUs plus NICs/storage. Scorpio routes and prioritizes traffic. By mid-2025, it contributed >10% of revenue and grew fastest, evolving Astera from signal fixer to traffic controller.

Astera's hidden moat: COSMOS software. Embedded telemetry in every chip allows real-time monitoring of millions of links—predicting failures, pinpointing issues. Hyperscalers like AWS/Microsoft integrate it deeply; switching vendors risks losing this "god-mode" visibility and cluster downtime.

Astera isn't just selling chips—it's selling a full system for reliable, non-crashing AI data centers.

Results show: In late 2025, revenue surged >100% YoY, profits neared $100M. With Blackwell ramping, Astera cashes in as long as GPUs ship.

But there's a shadow: Optical interconnects loom as copper's potential endgame.

Retimers extended copper beautifully through PCIe 6.0—perhaps even 7.0—but PCIe 7.0 (128 GT/s, spec finalized 2025) pushes skin effect to absurdity. Copper becomes a fragile tightrope; minor flaws cause disasters. Costs explode (boards packed with retimers + heatsinks), killing cost-effectiveness.

When patching a dying tech gets too expensive, markets ask: Keep propping it up or switch tracks?

That's where Option B enters: Optical Interconnects.

This is Marvell's stronghold.

While Astera squeezes every inch from copper (collecting tolls), Marvell bets on reshaping data centers: Copper will fade; photons win.

Marvell transformed under CEO Matt Murphy: It sold off low-margin consumer businesses and acquired Inphi (the core of its optical DSP franchise), Infinera (optical systems), and Cavium (networking/compute). By late 2025, data center revenue exceeded 70% of the total, growing ~80%.

Marvell's sharpest weapon: Optical DSP chips, dominating >60% market share for high-speed optics.

As rates climb (400G → 800G → 1.6T), DSP value rises—it's the "brain" ensuring clean light-signal recovery amid noise/distortion.
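Why does DSP value rise with the rate? Because module speeds climb by multiplying parallel lanes, and each lane's PAM4 symbols get harder to recover cleanly. A simplified sketch, assuming 100G-class lanes and ignoring FEC overhead (real lanes run slightly faster, around 106.25 Gb/s):

```python
def lanes_needed(module_gbps, lane_gbps):
    """How many electrical/optical lanes a module needs at a given lane rate."""
    return module_gbps // lane_gbps

def baud_rate_gbaud(lane_gbps, bits_per_symbol=2):
    """PAM4 carries 2 bits per symbol, so symbol rate is half the bit rate."""
    return lane_gbps / bits_per_symbol

for module in (400, 800, 1600):
    lanes = lanes_needed(module, 100)  # assuming 100G lanes
    print(f"{module}G module: {lanes} x 100G PAM4 lanes "
          f"at {baud_rate_gbaud(100):.0f} Gbaud each")
```

An 800G module is eight 100G lanes; a 1.6T module is sixteen, or eight 200G lanes at double the symbol rate. Every step up means more channels, tighter noise margins, and a smarter DSP "brain" to untangle them—which is exactly where Marvell's pricing power sits.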

Marvell is diversified: Competes in copper retimers/AECs, holds switch seats (e.g., AWS), and builds custom ASICs (backing AWS Trainium, Google TPUs).

This "full-stack" lets Marvell bundle entire systems—reducing customer risk/headaches vs. Astera's narrower focus.

In December 2025, Marvell acquired Celestial AI for $3.25B (up to $5.5B with milestones). Celestial's Photonic Fabric brings optics directly near/inside chips—eliminating long copper runs inside racks.

Data converts to light almost immediately, flying at light speed with minimal power/loss. This directly threatens Astera's copper-extension business.

Marvell expects meaningful revenue from this ~2028+, but it's positioned: One hand on today's cash cows (optics + copper), the other on tomorrow's crown (near-chip optics).

Astera's story: Helping copper survive longer.

Marvell's story: Betting on a copper-free future.

Now, the investor question: Where to put money?

These companies compete in the same arena but have very different stock personalities.

Astera Labs (ALAB): Pure, expensive AI lottery ticket. Valuations >30x sales demand rocket-like 50%+ annual growth with no mistakes. You're betting aggressive: Massive AI hardware ramps continue, copper hangs on stubbornly. High risk/high reward—one weak margin quarter could tank the stock.

Marvell (MRVL): Trades ~6-10x sales (legacy businesses drag), but data center push toward 80%+ will likely trigger re-rating. Buying Marvell feels like an AI infrastructure index fund: Holds optics, copper, custom chips. Slower upside but sleeps-well-at-night stability.

Risks:

Astera: Hyperscalers and chip giants (AWS, Google, NVIDIA) could build retimers in-house or integrate them into GPUs/CPUs—semiconductor buyers hate paying high margins on critical parts long-term.

Marvell: Big acquisition bets—if photonics landing/yields fail, write-offs hurt.

Timeline guess:

2026: Copper's last hurrah—Blackwell full ramp, intra-rack mostly copper. Likely Astera's peak year.

2027: Inflection—PCIe 7.0 exposes copper limits clearly.

2028+: Photonics era—if near-chip optics land, Marvell's second curve ignites.

So, what kind of investor are you?

Aggressive, high-volatility tolerant, chasing short bursts? Astera fits—but watch photonics acceleration closely and be ready to exit.

Prefer steady, long-term compounding without daily heart attacks? Marvell as core holding—waiting for the true intra-rack optical world in 2028+.

This wraps my AI hardware series. I've tried my best across multiple pieces to break down this massive, noisy, complex space clearly—from NVIDIA's compute dominance to hidden storage battles, TPU vs. GPU rivalries, and finally these invisible wires tying it all together.

Of course, limited by my knowledge—plenty of room for improvement. Thanks for reading.

@TigerObserver  @Daily_Discussion  @Tiger_comments  @TigerPM  @TigerStars  

# 💰Stocks to watch today?(19 Dec)

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.
