Groq Sets New Large Language Model Performance Record of 300 Tokens per Second per User on Meta AI Foundational LLM, Llama-2 70B

PR Newswire ·  2023/11/07 11:00
  • The Groq Language Processing Unit system is the AI assistance enablement technology poised to provide real-time, "low lag" experiences for users with its inference performance.

MOUNTAIN VIEW, Calif., Nov. 7, 2023 /PRNewswire/ -- Groq, an AI solutions company, announced it still holds the foundational Large Language Model (LLM) performance record for speed and accuracy amidst emerging market competition. Groq has set a new performance bar of more than 300 tokens per second per user on Meta AI's industry-leading LLM, Llama-2 70B, run on its Language Processing Unit system.
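As an illustration of what a "tokens per second per user" figure means, the sketch below shows how such a number is commonly computed for a single user's streaming session: tokens received divided by the wall-clock time taken to stream them. This is not Groq's benchmark code; `generate_stream` is a simulated stand-in for a real LLM client, and the delay value is an arbitrary placeholder.

```python
# Illustrative sketch only -- not Groq's benchmark methodology.
# generate_stream() simulates a streaming LLM response; a real client would
# yield tokens from an inference API instead of sleeping.
import time

def generate_stream(prompt, n_tokens=300, per_token_delay=0.003):
    """Simulated token stream for one user; delay stands in for decode latency."""
    for i in range(n_tokens):
        time.sleep(per_token_delay)
        yield f"token_{i}"

def tokens_per_second(prompt):
    """Tokens streamed to a single user divided by elapsed wall-clock time."""
    start = time.perf_counter()
    count = sum(1 for _ in generate_stream(prompt))
    elapsed = time.perf_counter() - start
    return count / elapsed

if __name__ == "__main__":
    rate = tokens_per_second("Example prompt")
    print(f"~{rate:.0f} tokens per second for one user")
```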
