We are also seeing strong momentum with Llama, and this year, usage of Llama tokens has grown exponentially. The more widely Llama is adopted and becomes an industry standard, the more its quality and efficiency improvements flow back into all of our products. This quarter, we released Llama 3.2, which includes leading small models that run on-device as well as open source multimodal models. We are working with enterprises to make Llama easier to use, and we are also working with the public sector to drive adoption of Llama across the US government. Llama 3 marked a turning point for the industry, but I am even more excited about Llama 4, which is now in development. We are training the Llama 4 models on a cluster of more than 100,000 H100s, bigger than anything I have seen reported elsewhere. The smaller Llama 4 models will be ready first, and we expect to release them sometime early next year. I think they will be important in several ways, with new modalities, new capabilities, stronger reasoning, and much faster speed. I expect open source to be the most cost-effective, customizable, trustworthy, highest-performing, and easiest-to-use option available to developers, and I am proud that Llama is leading the way on this.