专栏名称: 考研英语外刊阅读
按照考研英语真题的风格和类型精选同源外刊文章,节选《经济学人》、《卫报》、《时代周刊》等时政报刊,囊括长难句翻译、考研高频词汇词组、每日背单词、英语写作佳句积累等模块,全方位一举击破考研英语。

外刊阅读20240710|AI开始胡言乱语了

考研英语外刊阅读  · 公众号  ·  · 2024-07-10 07:59

正文




词数:383 words

难度:★★★☆☆





上期划线句答案

Each chip starts with a physiologically based pharmacokinetic model, known as a PBPK model—a mathematical expression of how a chemical compound behaves in a human body.

每个芯片都从一种基于生理的药代动力学模型(简称PBPK模型)开始——这是对化合物在人体内行为规律的数学表达。
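上面这句话提到的PBPK模型,本质上是一组描述化合物在体内分布与消除过程的数学公式。下面用Python写一个极简示意(仅为假设示例:单房室、一级消除,剂量与速率参数均为虚构;真实的PBPK模型包含多个生理房室和器官血流参数):

```python
import math

def concentration(dose_mg, vd_l, k_per_h, t_h):
    """单房室一级消除模型的血药浓度:C(t) = (Dose / Vd) * e^(-k*t)。"""
    return (dose_mg / vd_l) * math.exp(-k_per_h * t_h)

# 虚构参数:给药 100 mg,表观分布容积 50 L,消除速率常数 0.1/h
c0 = concentration(100, 50, 0.1, 0)           # 初始浓度 2.0 mg/L
t_half = math.log(2) / 0.1                    # 半衰期约 6.93 h
c_half = concentration(100, 50, 0.1, t_half)  # 约为初始浓度的一半,即 1.0 mg/L
```

经过一个半衰期后浓度恰好减半,这正是一级消除动力学的特征。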


本期内容


双语阅读


Para.1


Large language models (LLMs) are sophisticated computer programs designed to generate human-like text. They achieve this by analyzing vast amounts of written material and using statistical techniques to predict the likelihood of a particular word appearing next in a sequence. This process enables them to produce coherent and contextually appropriate responses to a wide range of prompts. Unlike human brains, which have a variety of goals and behaviors, LLMs have a singular objective: to generate text that closely resembles human language. This means their primary function is to replicate the patterns and structures of human speech and writing, not to understand or convey factual information.



大型语言模型是一种高级计算机程序,旨在生成类似人类的文本。具体实现方式是:通过分析海量书面材料,并运用统计技术预测序列中下一个词出现的概率。这个过程使它们能够对各种提示生成连贯且符合上下文的回复。与拥有多种目标和行为的人类大脑不同,大型语言模型只有一个单一目标:生成类似人类语言的文本。这意味着它们的主要任务是复制人类说话和写作的模式及结构,而不是理解或传达事实信息。



1. written material 书面材料

2. in a sequence 按顺序;依次

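Para.1提到,大型语言模型通过统计手段预测序列中下一个词出现的概率。下面用一个极简的Python示例演示这一思路(仅为示意:用二元组词频代替真实模型中的神经网络,语料为虚构的几个英文单词):

```python
from collections import Counter, defaultdict

# 虚构的小语料,仅作演示
corpus = "the cat sat on the mat and the cat slept".split()

# 统计每个词后面跟随各个词的次数(二元组词频)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """返回 word 之后各候选词出现的概率分布。"""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

probs = predict_next("the")  # 'the' 之后:'cat' 概率约 0.67,'mat' 约 0.33
```

模型只关心"哪个词更可能接在后面",而不关心这句话是否符合事实——这正是原文强调的要点。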


Para.2


The term “AI hallucination” is used to describe instances when an LLM like ChatGPT produces inaccurate or entirely fabricated information. This term suggests that the AI is experiencing a perceptual error, akin to a human seeing something that isn’t there. However, this metaphor is misleading, according to Hicks and his colleagues, because it implies that the AI has a perspective or an intent to perceive and convey truth, which it does not.



“人工智能幻觉”用来描述像ChatGPT这样的大型语言模型生成不准确或完全虚构信息的情况。这个术语表明人工智能正在经历感知错误,类似于人类看到并不存在的东西。

翻译划线句,在文末留言打卡,答案下期公布~



1. hallucination

/həˌluːsɪˈneɪʃən/

n. 幻觉,幻想;错觉

2. fabricate

/ˈfæbrɪˌkeɪt/

v. 制造,生产; 捏造,编造; 组装,装配

3. metaphor

/ˈmɛtəfə/

n. 隐喻,暗喻; 象征,标志



Para.3


To better understand why these inaccuracies might be better described as bullshit, it is helpful to look at the concept of bullshit as defined by philosopher Harry Frankfurt. In his seminal work, Frankfurt distinguishes bullshit from lying. A liar, according to Frankfurt, knows the truth but deliberately chooses to say something false. In contrast, a bullshitter is indifferent to the truth. The bullshitter’s primary concern is not whether what they are saying is true or false but whether it serves their purpose, often to impress or persuade.



为了更好地理解为什么这些错误信息更适合称为“胡扯”,我们可以看看哲学家哈里·弗兰克福特对“胡扯”的解释。在他的经典著作中,弗兰克福特区分了“胡扯”和撒谎。根据弗兰克福特的说法,撒谎的人是知道真相却故意说谎,而“胡扯”的人则对真相并不关心。胡说八道的人说话的主要目的并不在于内容的真假,而是看这些话是否能达到他们的目的,通常是为了给人留下印象或者说服别人。



1. seminal

/ˈsɛmɪnəl/

adj. 有创造力的,对未来有影响的;重大的;种子的;精液的;生殖的



Para.4


The distinction is significant because it influences how we understand and address the inaccuracies produced by these models. If we think of these inaccuracies as hallucinations, we might believe that the AI is trying and failing to convey truthful information. But AI models like ChatGPT do not have beliefs, intentions, or understanding, Hicks and his colleagues explained. They operate purely on statistical patterns derived from their training data.



这二者的区别很重要,因为它影响我们如何理解和处理这些(大语言)模型产生的错误。如果我们把这些错误看作幻觉,我们可能会认为人工智能试图传达真实信息,但却失败了。然而,希克斯和他的同事解释说,像ChatGPT这样的人工智能模型并没有信念、意图或理解,它们仅仅是根据训练数据中的统计模式运行而已。



