OpenAI Fears 'Users Might Form Social Relationships' With AI Due To ChatGPT's Human-Like Voice Mode

Benzinga ·  08/09 07:45

OpenAI has expressed concerns about the potential emotional reliance of users on its new ChatGPT voice mode, which closely mimics human speech patterns.

What Happened: OpenAI, the company behind ChatGPT, has raised concerns about the possibility of users forming emotional bonds with the AI, potentially leading to a reduced need for human interaction.

The Microsoft Corp.-backed company fears that this could affect healthy relationships and lead to an over-reliance on AI, given its potential for errors.

The report, released on Thursday, highlights a broader risk associated with AI. Tech companies are rapidly rolling out AI tools that could significantly impact various aspects of human life, without a comprehensive understanding of the implications.

"Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships," the report said.

It added, "Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and 'take the mic' at any time, which, while expected for an AI, would be anti-normative in human interactions."

The report highlights the risk of users trusting the AI more than they should, given its potential for errors. OpenAI plans to continue studying these interactions to ensure the technology is used safely.

Why It Matters: The rise of AI has been a topic of concern for various experts. A Pew Research survey found that 52% of Americans are more concerned than excited about the increased use of AI. This wariness coincides with growing awareness of the technology: those most familiar with AI express more concern than excitement about it.

AI's potential negative effects have also been highlighted in the context of cybersecurity. Sakshi Mahendru, a cybersecurity expert, emphasized the need for AI-powered solutions to combat the evolving landscape of cyber threats.

Moreover, the phenomenon of AI "hallucination," where AI generates nonsensical or irrelevant responses, remains a significant issue. Even Tim Cook, CEO of Apple Inc., admitted in a recent interview that preventing AI hallucinations is a challenge.
