Nvidia Denies Receiving DOJ Subpoena: Can Chip Stocks Stabilize?

Very insightful interview with a high-ranking former $Microsoft (MSFT.US)$ employee responsible for AI, covering the AI chip space and its relation to $NVIDIA (NVDA.US)$:

1. At the peak of COVID, $NVIDIA (NVDA.US)$ told $Microsoft (MSFT.US)$ that they had to get in line because its order book was so full. Even before that, $Microsoft (MSFT.US)$ had gone to $Intel (INTC.US)$, which gave them a beautiful roadmap for AI accelerators but with no deadline. That was not acceptable for $Microsoft (MSFT.US)$.

2. Because $Microsoft (MSFT.US)$ was concerned about being too dependent on $NVIDIA (NVDA.US)$ and not having enough supply, it went to $Advanced Micro Devices (AMD.US)$. Working with $Advanced Micro Devices (AMD.US)$ was harder because it does not have CUDA. They invested in a company called Lamini because it took CUDA code and recompiled it to $Advanced Micro Devices (AMD.US)$'s ROCm almost in real time. About 90-95% of the CUDA code was portable; a small sketch of why that works follows this list.

3. At the start, $Microsoft (MSFT.US)$ also had some memory and overheating issues with $Advanced Micro Devices (AMD.US)$'s MI300X.

4. Because of these struggles, the leadership at $Microsoft (MSFT.US)$ decided to build their own ASIC. They work closely with $Taiwan Semiconductor (TSM.US)$ to finally have a backup.

5. When they started, he was worried that $Microsoft (MSFT.US)$'s Athena ASICs wouldn't be on par or a great chip, but he was wrong: it surprised him in a very positive way.

6. $Microsoft (MSFT.US)$ helped OpenAI understand that not all of its clients will use $NVIDIA (NVDA.US)$ chips. According to him, OpenAI decided to build a middleware, written in an old client-server language, called Trident. Trident then manages the GPUs and AI accelerators, whatever they are underneath, for the compute; a hypothetical sketch of that kind of abstraction layer also follows the list.

7. He also sees great startups like Groq as a good alternative for inference workloads. He doesn't think hyperscalers will be as dependent on $NVIDIA (NVDA.US)$ going forward as they were in the past.
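
On point 2, here is a minimal sketch of why so much CUDA code ports to ROCm: AMD's HIP runtime mirrors the CUDA runtime almost call for call, so a thin macro layer (or a source translation tool, which is roughly what a recompile-to-ROCm service automates) can retarget the same source. This is an illustrative example, not Lamini's actual tooling; the macro names and build commands are assumptions.

// saxpy_portable.cu -- illustrative sketch: one SAXPY source that can target
// either the CUDA runtime (NVIDIA) or the HIP runtime (AMD ROCm), because the
// two APIs line up nearly one-to-one. Build commands are typical, not verified
// for every toolchain version:
//   NVIDIA: nvcc saxpy_portable.cu -o saxpy
//   AMD:    hipcc -x hip saxpy_portable.cu -o saxpy
#include <cstdio>
#include <vector>

#ifdef __HIP_PLATFORM_AMD__
  #include <hip/hip_runtime.h>
  #define gpuMalloc             hipMalloc
  #define gpuMemcpy             hipMemcpy
  #define gpuMemcpyHostToDevice hipMemcpyHostToDevice
  #define gpuMemcpyDeviceToHost hipMemcpyDeviceToHost
  #define gpuDeviceSynchronize  hipDeviceSynchronize
  #define gpuFree               hipFree
#else
  #include <cuda_runtime.h>
  #define gpuMalloc             cudaMalloc
  #define gpuMemcpy             cudaMemcpy
  #define gpuMemcpyHostToDevice cudaMemcpyHostToDevice
  #define gpuMemcpyDeviceToHost cudaMemcpyDeviceToHost
  #define gpuDeviceSynchronize  cudaDeviceSynchronize
  #define gpuFree               cudaFree
#endif

// The kernel body is identical on both platforms: same thread-indexing model.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    gpuMalloc((void**)&dx, n * sizeof(float));
    gpuMalloc((void**)&dy, n * sizeof(float));
    gpuMemcpy(dx, hx.data(), n * sizeof(float), gpuMemcpyHostToDevice);
    gpuMemcpy(dy, hy.data(), n * sizeof(float), gpuMemcpyHostToDevice);

    // The triple-chevron launch syntax is accepted by both nvcc and hipcc.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    gpuDeviceSynchronize();

    gpuMemcpy(hy.data(), dy, n * sizeof(float), gpuMemcpyDeviceToHost);
    std::printf("y[0] = %f (expected 5.0)\n", hy[0]);

    gpuFree(dx);
    gpuFree(dy);
    return 0;
}

The remaining 5-10% that does not port this way is typically code tied to NVIDIA-specific libraries or hardware features, which is where the real engineering effort goes.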
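On point 6, the internals of Trident are not public, so the following is only a hypothetical sketch of the general dispatch pattern such a middleware could use: application code targets one vendor-neutral interface, and a backend chosen at runtime decides whether the work lands on NVIDIA GPUs, AMD GPUs, or an in-house ASIC. Every class and function name here is made up for illustration; it is not OpenAI's API.

// trident_sketch.cpp -- hypothetical hardware-abstraction pattern, not the
// real Trident. Each backend hides its own driver/runtime; a plain CPU
// matmul keeps the sketch runnable anywhere.
#include <cstdio>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

static void cpuMatmul(const std::vector<float>& a, const std::vector<float>& b,
                      std::vector<float>& out, int n) {
    out.assign(n * n, 0.0f);
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k)
            for (int j = 0; j < n; ++j)
                out[i * n + j] += a[i * n + k] * b[k * n + j];
}

// The vendor-neutral interface callers program against.
struct Accelerator {
    virtual ~Accelerator() = default;
    virtual std::string name() const = 0;
    virtual void matmul(const std::vector<float>& a, const std::vector<float>& b,
                        std::vector<float>& out, int n) = 0;
};

struct NvidiaBackend : Accelerator {
    std::string name() const override { return "NVIDIA GPU"; }
    void matmul(const std::vector<float>& a, const std::vector<float>& b,
                std::vector<float>& out, int n) override {
        cpuMatmul(a, b, out, n);  // a real backend would call CUDA/cuBLAS here
    }
};

struct AmdBackend : Accelerator {
    std::string name() const override { return "AMD GPU"; }
    void matmul(const std::vector<float>& a, const std::vector<float>& b,
                std::vector<float>& out, int n) override {
        cpuMatmul(a, b, out, n);  // a real backend would call ROCm/rocBLAS here
    }
};

struct AsicBackend : Accelerator {
    std::string name() const override { return "in-house ASIC"; }
    void matmul(const std::vector<float>& a, const std::vector<float>& b,
                std::vector<float>& out, int n) override {
        cpuMatmul(a, b, out, n);  // a real backend would call the ASIC's SDK
    }
};

// The middleware's job in one function: pick the hardware without the caller
// ever naming a vendor.
static std::unique_ptr<Accelerator> selectBackend(const std::string& target) {
    if (target == "nvidia") return std::make_unique<NvidiaBackend>();
    if (target == "amd")    return std::make_unique<AmdBackend>();
    if (target == "asic")   return std::make_unique<AsicBackend>();
    throw std::runtime_error("unknown accelerator target: " + target);
}

int main() {
    const int n = 2;
    std::vector<float> a = {1, 2, 3, 4}, b = {5, 6, 7, 8}, out;

    // The same caller code runs on whichever hardware the middleware selects.
    for (const std::string target : {"nvidia", "amd", "asic"}) {
        auto dev = selectBackend(target);
        dev->matmul(a, b, out, n);
        std::printf("%s -> out[0] = %.0f\n", dev->name().c_str(), out[0]);
    }
    return 0;
}

The design point is the one the interview makes: once workloads go through a layer like this, the hardware underneath becomes swappable, which weakens any single vendor's lock-in.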
  • The real Honey bee : Obviously big companies are trying to make something like what NVIDIA makes so they are not dependent only on NVIDIA. They might make it better or even worse. There will always be two options.
