
Groq Sets New Large Language Model Performance Record of 300 Tokens per Second per User on Meta AI Foundational LLM, Llama-2 70B

PR Newswire ·  2023/11/07 11:00
  • The Groq Language Processing Unit system is the AI-assistance enablement technology poised to deliver real-time, "low lag" user experiences through its inference performance.

MOUNTAIN VIEW, Calif., Nov. 7, 2023 /PRNewswire/ -- Groq, an AI solutions company, announced it still holds the foundational Large Language Model (LLM) performance record for speed and accuracy amidst emerging market competition. Groq has set a new performance bar of more than 300 tokens per second per user on Meta AI's industry-leading LLM, Llama-2 70B, run on its Language Processing Unit system.

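For context on how a "tokens per second per user" figure is typically derived, the sketch below times a single user's streaming generation and divides the token count by the wall-clock time of that stream. This is a minimal illustration, not Groq's methodology: the `stream_completion` generator is a hypothetical stand-in stub that simulates a model emitting tokens, and a real measurement would wrap the inference provider's streaming client instead.

```python
# Minimal sketch of measuring per-user token throughput for a streaming
# LLM endpoint. `stream_completion` is a hypothetical stub (NOT Groq's API);
# it simulates token emission so the script runs standalone.

import time
from typing import Iterator


def stream_completion(prompt: str) -> Iterator[str]:
    """Hypothetical streaming generator yielding one token at a time."""
    for i in range(512):
        time.sleep(0.003)  # simulated per-token latency (~333 tokens/sec)
        yield f"tok{i} "


def tokens_per_second(prompt: str) -> float:
    """Wall-clock throughput for a single user's generation stream."""
    start = time.perf_counter()
    n_tokens = sum(1 for _ in stream_completion(prompt))
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed


if __name__ == "__main__":
    rate = tokens_per_second("Explain Llama-2 70B in one paragraph.")
    print(f"Observed throughput: {rate:.1f} tokens/sec for this user")
```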

Groq announced it still holds the foundational Large Language Model (LLM) performance record for speed and accuracy amidst emerging market competition.
Groq now runs the foundational LLM Llama-2 70B at over 300 tokens per second per user.