Groq Sets New Large Language Model Performance Record of 300 Tokens per Second per User on Meta AI Foundational LLM, Llama-2 70B
- The Groq Language Processing Unit system is an AI inference technology poised to provide real-time, "low lag" experiences for users of AI assistants.
MOUNTAIN VIEW, Calif., Nov. 7, 2023 /PRNewswire/ -- Groq, an AI solutions company, announced that it still holds the foundational Large Language Model (LLM) performance record for speed and accuracy amid emerging market competition. Groq has set a new performance bar of more than 300 tokens per second per user on Meta AI's industry-leading LLM, Llama-2 70B, run on its Language Processing Unit system.