Alphabet Inc. (NASDAQ:GOOG) (NASDAQ:GOOGL) co-founder Sergey Brin admitted that the tech giant was overly cautious in deploying language models, despite being a pioneer in the field.
What Happened: Appearing at the All-In Summit earlier this week, Brin discussed the company's reluctance to deploy language models.
Brin acknowledged that Google had been "too timid to deploy" language models, despite having essentially invented them with the publication of the "Transformer" paper seven years ago. He attributed this hesitance to a fear of making mistakes and causing embarrassment.
The co-founder highlighted the need for risk-taking in the tech industry, stating that while these models can make "really stupid mistakes," they are also "incredibly powerful" and can handle tasks that would otherwise take a significant amount of time to learn.
Despite the potential for occasional errors, Brin expressed his belief that these models should be released to the public for experimentation, rather than being kept "close to the chest and hidden until it's like perfect."
Why It Matters: Google's cautious approach to AI deployment has been a topic of discussion in the tech industry.
In May, Deepwater Asset Management's managing partner Gene Munster said that Google was six months behind ChatGPT-parent OpenAI in AI development, but still about five years ahead of the rest of the industry.
This was followed by criticism from a 16-year Google veteran who claimed that the company's AI projects were driven by "panic" rather than user needs.
Previously, an internal email from Microsoft Corporation revealed its concerns about falling behind Alphabet's AI capabilities. The email suggested that Microsoft's investment in OpenAI was driven by the tech giant's fear of losing ground in AI to Google.
Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.