Nvidia has launched a new GPU inference platform with four configurations (L4, L40, H100 NVL, and Grace Hopper) for accelerating AI video, image generation, large language model (LLM) deployment, and recommendation systems. For the training phase, the DGX H100 system combines eight H100 GPU modules to deliver 32 petaFLOPS of compute at FP8 precision, and ships with the complete NVIDIA AI software stack to help simplify AI development. The NVIDIA DGX H100 AI supercomputer is now in full production.

Many companies working on generative AI, including the major cloud providers, are using Nvidia's H100 GPUs to accelerate their workloads. Accelerated computing products and the AI industry develop in tandem: Nvidia continues to provide a more powerful compute foundation for training ever-larger AI models, which has played an important role in advancing the frontier of AI training and inference. The current AI boom, in turn, brings a broader market and greater opportunities.
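
As a quick sanity check on the headline number, a minimal sketch of the arithmetic is shown below; the roughly 4 petaFLOPS of FP8 throughput per H100 module is an assumption inferred from the quoted system total, not a figure stated in this article.

```python
# Back-of-the-envelope check of the DGX H100 aggregate FP8 figure.
# FP8_PFLOPS_PER_GPU is an assumed per-module peak, not a spec from the text.

NUM_GPUS = 8               # H100 GPU modules in a DGX H100 system
FP8_PFLOPS_PER_GPU = 4.0   # assumed peak FP8 throughput per H100, in petaFLOPS

system_fp8_pflops = NUM_GPUS * FP8_PFLOPS_PER_GPU
print(f"Aggregate FP8 compute: {system_fp8_pflops:.0f} petaFLOPS")  # -> 32 petaFLOPS
```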