
Nvidia AI Dominance Challenged By Asian Startups Betting On Energy-Efficient And Cost-Effective Silicon — 'Inference' And 'Training' Chips Emerge As Key Focus

Benzinga · 23:37

In a bid to challenge $NVIDIA (NVDA.US)$'s AI dominance, Asian startups are developing more energy-efficient and cost-effective chips for specific artificial intelligence applications.

What Happened: These startups are targeting a gap in the market left by the high energy consumption and bulky design of Nvidia's GPUs, Nikkei Asia reported on Friday.

These startups are focusing on two types of AI chips: "inference" chips, used to run existing AI models, and "training" chips, the high-powered data-processing components used to develop new AI models.

While Nvidia's GPUs continue to dominate the AI landscape, the startups believe the chips, though powerful, are too energy-intensive and expensive for many applications.

Preferred Networks (PFN) CEO Toru Nishikawa stated, "No one has come up with the perfect chip architecture for inference." PFN is developing chips that aim to be more efficient and less costly than Nvidia's offerings.

Nvidia's GPUs are primarily used for training AI models, but their high cost and energy consumption make them impractical for devices like laptops and wearables. Analysts, including Kazuhiro Sugiyama from Omdia, believe that the demand for on-device AI will rise, encouraging new entrants to the market.

Startups such as Edgecortix, led by Sakyasingha Dasgupta, are focusing on solving issues like the "memory wall" problem to create more streamlined and energy-efficient AI chips. These efforts are part of a broader strategy to cater to the growing demand for AI in industrial applications and robotics, particularly in Asia, according to the report.

"Nvidia's GPU is mainly suited for training, but we are seeing more newcomers developing chips which can target both training and inference," Sugiyama said.

Other companies entering the market include U.S.-based SambaNova Systems, backed by $SoftBank Group (9984.JP)$'s Vision Fund; Tenstorrent, founded by a former $Intel (INTC.US)$ engineer; and the British company Graphcore, recently acquired by SoftBank.

Big tech companies like $Alphabet-C (GOOG.US)$, $Meta Platforms (META.US)$, and $Amazon (AMZN.US)$'s Amazon Web Services are also joining in, along with Nvidia rival $Advanced Micro Devices (AMD.US)$.

Why It Matters: The competition between Nvidia and emerging Asian startups is heating up as the AI chip market continues to expand. Recently, Eric Schmidt, former CEO of Google, highlighted Nvidia as a major player in the AI sector, noting that large tech companies are planning significant investments in Nvidia-based AI data centers, potentially costing up to $300 billion.

Meanwhile, SoftBank has faced setbacks in its efforts to rival Nvidia with its own AI chip production. Negotiations with Intel reportedly fell through due to Intel's inability to meet production demands, leading SoftBank to turn to $Taiwan Semiconductor (TSM.US)$, a key Nvidia supplier.

This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote.

Disclaimer: This content is for informational and educational purposes only and does not constitute a recommendation or endorsement of any specific investment or investment strategy.