
Groq Sets New Large Language Model Performance Record of 300 Tokens per Second per User on Meta AI Foundational LLM, Llama-2 70B

PR Newswire ·  Nov 7, 2023 11:00
  • With its inference performance, the Groq Language Processing Unit (LPU) system is positioned to enable real-time, "low lag" AI assistant experiences for users.

MOUNTAIN VIEW, Calif., Nov. 7, 2023 /PRNewswire/ -- Groq, an AI solutions company, announced it still holds the foundational Large Language Model (LLM) performance record for speed and accuracy amidst emerging market competition. Groq has set a new performance bar of more than 300 tokens per second per user on Meta AI's industry-leading LLM, Llama-2 70B, run on its Language Processing Unit system.
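
To put the headline figure in concrete terms: at roughly 300 tokens per second per user, a 500-token answer streams in under two seconds. The sketch below shows one way such a per-user rate could be measured from a streamed response; the simulated_stream generator and its pacing are hypothetical stand-ins for a real streaming endpoint, not a description of Groq's API.

    import time
    from typing import Iterable, Iterator

    def tokens_per_second(token_stream: Iterable[str]) -> float:
        """Count streamed tokens and divide by wall-clock time to get tokens/sec."""
        start = time.perf_counter()
        count = 0
        for _ in token_stream:
            count += 1
        elapsed = time.perf_counter() - start
        return count / elapsed if elapsed > 0 else float("inf")

    def simulated_stream(n_tokens: int, rate: float) -> Iterator[str]:
        """Hypothetical stand-in for a streaming LLM endpoint, paced at `rate` tokens/sec."""
        for i in range(n_tokens):
            time.sleep(1.0 / rate)  # mimic per-token generation latency
            yield f"tok{i}"

    if __name__ == "__main__":
        # At ~300 tokens/s per user, 500 tokens take about 500 / 300 = 1.7 seconds.
        measured = tokens_per_second(simulated_stream(n_tokens=500, rate=300.0))
        print(f"measured throughput: {measured:.0f} tokens/s")
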
