
Meeting Minutes | AMD Data Center and AI Technology Premiere

Senorita Earnings wrote a column · Jun 15, 2023 02:53
Key Points:
1. AMD announced updates to its fourth-generation EPYC processor family, including over 640 EPYC-based instances available today and plans to launch 200 new instances by the end of 2023.
2. The fourth-generation AMD EPYC processor continues to be the highest-performing and most power-efficient AI CPU, providing optimal performance for most AI workloads.
3. AMD has partnered with AWS to launch a preview of the EC2 M7a instance, which uses AMD's fourth-generation "Genoa" EPYC processor, offering new features and improved performance.
4. AMD has introduced the Instinct MI300X, the world's most advanced accelerator for generative AI, based on the next-generation CDNA 3 accelerator architecture.
5. The ROCm software ecosystem for data center accelerators now provides "day-0" support for PyTorch 2.0.
6. Additionally, AMD announced a powerful networking product portfolio, including the AMD Pensando DPU, AMD ultra-low-latency NICs, and AMD adaptive NICs. The DPU has already been deployed with cloud partners such as IBM Cloud, Microsoft Azure, and Oracle Cloud Infrastructure.
Full text:
At the "Data Center and AI Tech Day" event, AMD executives took to the stage with industry leaders from AWS, Citadel, Hugging Face, Meta, Microsoft Azure, and PyTorch to showcase their technology partnerships and push next-generation high-performance CPU and AI accelerator solutions to market.
I. Computing Infrastructure Optimized for Modern Data Centers
  AMD announced a series of updates to its fourth-generation EPYC processor family:
   1. Advancing the best data center CPU
  1) There are currently over 640 EPYC-based instances, with an additional 200 expected by the end of 2023.
  2) The fourth-generation AMD EPYC processor (code-named Genoa) continues to be the highest-performing and most power-efficient AI CPU.
Most AI workloads today run on CPUs, and Genoa is the best CPU for AI.
3) Dave Brown, VP of EC2 at AWS, shared:
   AMD has been working with AWS since 2018, offering over 100 instances based on EPYC processors for general-purpose, compute-optimized, memory-optimized, and high-performance computing workloads.
In 2022, AWS launched the HPC-specific instance Amazon EC2 Hpc6a, which delivers up to 65% better performance than comparable x86 instances in workloads such as computational fluid dynamics.
Customers:
 DTN, Sprinklr, and TrueCar have benefited from the cost and cloud-utilization optimizations of AMD-based Amazon EC2 instances.
 Customers are looking to bring new types of applications to AWS, such as financial applications, application servers, video transcoding, and simulation modeling.
  By combining the performance of fourth-generation AMD EPYC processors with the AWS Nitro System, AWS is advancing cloud technology so customers can do more, with better performance, across more EC2 instances.
 New collaboration
    The EC2 M7a instance preview has been launched, with general availability expected in Q3. The new instance uses AMD's fourth-generation "Genoa" EPYC processor.
The EC2 M7a instance offers new processor features such as AVX3-512, VNNI, and BFloat16, and gives customers up to 50% more compute performance than M6a instances, bringing a wider range of workloads to AWS.
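For a concrete sense of these processor features, here is a minimal Python sketch (an illustration, not AWS documentation) that checks for the corresponding CPU flags from inside a Linux guest such as an M7a instance; the flag spellings follow typical /proc/cpuinfo output and can vary by kernel version.

# Minimal sketch: check for AVX-512, VNNI, and BFloat16 CPU flags on a Linux guest.
def read_cpu_flags(path="/proc/cpuinfo"):
    # Return the set of CPU feature flags reported by the Linux kernel.
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = read_cpu_flags()
for feature in ("avx512f", "avx512_vnni", "avx512_bf16"):
    print(f"{feature}: {'present' if feature in flags else 'missing'}")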
4) Outside of the event, Oracle announced plans to offer new Oracle Cloud Infrastructure (OCI) E5 instances with fourth-generation AMD EPYC processors.
2. Cloud-Native Computing
1) Fourth-generation AMD EPYC 97x4 processors (code-named Bergamo)
    128 Zen 4c cores per socket, offering maximum vCPU density, industry-leading performance for cloud-native applications, and leading energy efficiency.
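For a rough sense of what "maximum vCPU density" implies, a back-of-the-envelope calculation in Python, assuming a dual-socket server with two threads per core (illustrative only; actual vCPU counts depend on the server and cloud configuration):

# Illustrative vCPU-density arithmetic (assumes a dual-socket server and SMT-2).
cores_per_socket = 128    # Zen 4c cores in an EPYC 97x4 "Bergamo" socket
sockets = 2               # assumed dual-socket server
threads_per_core = 2      # assumed two threads per core (SMT)
vcpus = cores_per_socket * sockets * threads_per_core
print(f"up to {vcpus} vCPUs per dual-socket server")  # 512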
    Alexis Bjorlin, VP of Infrastructure at Meta, shared:
    Meta discussed with AMD how well these processors suit its mainstream applications such as Instagram and WhatsApp.
    Compared to third-generation AMD EPYC processors, Meta saw impressive performance improvements across various workloads, along with significant TCO improvements.
    AMD and Meta also discussed how to optimize EPYC CPUs for Meta's energy efficiency and compute density requirements.
2) Fourth-generation AMD EPYC processors with AMD 3D V-Cache technology
  The world's highest-performance x86 server CPU for technical computing
Nidhi Chappell, GM of Azure HPC and AI at Microsoft, shared:
Microsoft announced the general availability of Azure HBv4 and HX instances
These instances use fourth-generation AMD EPYC processors and AMD 3D V-Cache technology to help customers accelerate time-to-market and digitization processes
II. AMD AI Platform
AMD announced its AI platform strategy, giving customers a hardware product portfolio spanning the cloud, the edge, and endpoints, along with deep industry software collaborations to develop scalable and ubiquitous AI solutions.
1) The world's most advanced accelerator for generative AI
New details about the AMD Instinct MI300 series accelerators:
 AMD has launched the AMD Instinct MI300X accelerator, the world's most advanced accelerator for generative AI.
 Based on the next-generation CDNA 3 accelerator architecture, it supports up to 192GB of HBM3 memory, providing the compute and memory efficiency needed for training and inference of large language models in generative AI workloads.
With the large memory of the AMD Instinct MI300X, customers can now fit large language models such as Falcon-40B, a 40-billion-parameter model, on a single MI300X accelerator (a rough sizing sketch follows below).
MI300X will be sampling with lead customers starting in Q3.
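A rough sizing sketch of why a 40-billion-parameter model can fit on a single 192GB accelerator, assuming 16-bit weights (illustrative arithmetic only; a real deployment also needs memory for activations and the KV cache):

# Back-of-the-envelope memory check for Falcon-40B on one MI300X (illustrative).
params = 40e9             # Falcon-40B parameter count
bytes_per_param = 2       # 16-bit (fp16 / bfloat16) weights
hbm3_capacity_gb = 192    # MI300X memory per accelerator, per AMD
weights_gb = params * bytes_per_param / 1e9
print(f"weights at 16-bit: ~{weights_gb:.0f} GB of {hbm3_capacity_gb} GB HBM3")
# Roughly 80 GB of weights leaves headroom on a single device.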
   AMD has introduced the AMD Instinct platform, which integrates 8 MI300X accelerators into an industry-standard design, providing the ultimate solution for AI inference and training.
The AMD Instinct MI300A is the world's first APU accelerator for HPC and AI workloads, and is currently available to customers for sampling.
2) Bringing an open, proven, and ready artificial intelligence software platform to market.
ROCm software ecosystem for data center accelerators.
  PyTorch founder and Meta VP Soumith Chintala shared:
  He discussed the collaboration between AMD and the PyTorch Foundation on the ROCm software stack.
ROCm release 5.4.2 provides "day-0" support for PyTorch 2.0 across all AMD Instinct accelerators.
This integration gives developers access to a large number of PyTorch-powered AI models that are compatible and ready to run on AMD accelerators.
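As a minimal sketch of what that looks like in practice (assuming a ROCm build of PyTorch is installed), PyTorch's ROCm wheels expose AMD Instinct GPUs through the familiar torch.cuda device interface, so existing model code typically runs unchanged:

# Minimal sketch: run a matrix multiply on an AMD accelerator via a ROCm build of PyTorch.
import torch

print(torch.__version__)              # ROCm builds carry a "+rocm" suffix
print(torch.cuda.is_available())      # True when an AMD Instinct GPU is visible
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
y = x @ x                             # runs on the accelerator when available
print(y.shape, y.device)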
  Hugging Face CEO Clement Delangue shared:
  Thousands of Hugging Face models will be optimized for AMD platforms, including AMD Instinct accelerators, AMD Ryzen and AMD EPYC processors, AMD Radeon GPUs, and Versal and Alveo adaptive products.
III. Powerful networking product portfolio for the cloud and enterprise
1) The networking product portfolio includes AMD Pensando DPUs, AMD ultra-low-latency NICs, and AMD adaptive NICs.
  The AMD Pensando DPU combines a powerful software stack with zero-trust security and a leading programmable packet processor, making it the world's most intelligent and highest-performing DPU.
The AMD Pensando DPU has been widely deployed with cloud partners such as IBM Cloud, Microsoft Azure, and Oracle Cloud Infrastructure.
In the enterprise, the AMD Pensando DPU is deployed in the HPE Aruba CX 10000 smart switch in collaboration with leading IT service companies such as DXC, and as part of VMware vSphere Distributed Services Engine to accelerate application performance for customers.
Meeting Minutes | AMD Data Center and AI Technology Premiere
Meeting Minutes | AMD Data Center and AI Technology Premiere
Meeting Minutes | AMD Data Center and AI Technology Premiere
Meeting Minutes | AMD Data Center and AI Technology Premiere
Meeting Minutes | AMD Data Center and AI Technology Premiere
2) Next-generation DPU roadmap, code-named "Giglio"
   Aimed at delivering better performance and power efficiency than the current generation, and expected to be available by the end of 2023.
3) AMD Pensando Software-in-Silicon Developer Kit (SSDK)
  Enables customers to quickly develop or migrate services for deployment on the AMD Pensando P4 programmable DPU, working in coordination with the rich set of existing capabilities on the AMD Pensando platform.

$Amazon(AMZN.US)$ $Advanced Micro Devices(AMD.US)$ $Meta Platforms(META.US)$ $Oracle(ORCL.US)$ $Microsoft(MSFT.US)$