OpenAI Fears 'Users Might Form Social Relationships' With AI Due To ChatGPT's Human-Like Voice Mode

Benzinga ·  08/09 07:45

OpenAI has expressed concerns about the potential emotional reliance of users on its new ChatGPT voice mode, which closely mimics human speech patterns.

What Happened: OpenAI, the company behind ChatGPT, has raised concerns about the possibility of users forming emotional bonds with the AI, potentially leading to a reduced need for human interaction.

The Microsoft Corp.-backed company fears that this could affect healthy relationships and lead to an over-reliance on AI, given its potential for errors.

The report, released on Thursday, highlights a broader risk associated with AI. Tech companies are rapidly rolling out AI tools that could significantly impact various aspects of human life, without a comprehensive understanding of the implications.

"Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships," the report said.

It added, "Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and 'take the mic' at any time, which, while expected for an AI, would be anti-normative in human interactions."

The report highlights the risk of users trusting the AI more than they should, given its potential for errors. OpenAI plans to continue studying these interactions to ensure the technology is used safely.

Why It Matters: The rise of AI has been a topic of concern for various experts. A Pew Research survey found that 52% of Americans are more concerned than excited about the increased use of AI. This wariness coincides with an uptick in awareness about AI, with individuals who are most aware expressing more concern than excitement about AI.

AI's potential negative effects have also been highlighted in the context of cybersecurity. Sakshi Mahendru, a cybersecurity expert, emphasized the need for AI-powered solutions to combat the evolving landscape of cyber threats.

Moreover, the phenomenon of AI "hallucination," where AI generates nonsensical or irrelevant responses, remains a significant issue. Even Tim Cook, CEO of Apple Inc., admitted in a recent interview that preventing AI hallucinations is a challenge.
