摘要:管理類聯(lián)考考研的七個(gè)專業(yè)的考試科目都包含204考研英語(yǔ)(二),為方便考生們提高英語(yǔ)寫作水平,希賽網(wǎng)為考生整理了考研英語(yǔ)二經(jīng)典外刊選讀文章,方便各位考生備考。
本文為管理類聯(lián)考考研英語(yǔ)二外刊選讀第十五篇,可點(diǎn)擊上方藍(lán)色圖標(biāo)“本文資料”,免費(fèi)獲取更多管理類聯(lián)考考研英語(yǔ)二外刊選讀內(nèi)容,方便各位考生備考、了解考試內(nèi)容。
ChatGPT raises a debate over how humans learn language
題材:科普類
出處:The Economist《經(jīng)濟(jì)學(xué)人》
字?jǐn)?shù):739 words
[1] When Deep Blue, a chess computer, defeated Garry Kasparov, a world champion, in 1997, many gasped in fear of machines triumphing over mankind. In the intervening years, artificial intelligence has done some astonishing things, but none has managed to capture the public imagination in quite the same way. Now, though, the astonishment of the Deep Blue moment is back, because computers are employing something that humans consider their defining ability: language.
【1997年,當(dāng)國(guó)際象棋計(jì)算機(jī)深藍(lán)擊敗世界冠軍加里·卡斯帕羅夫時(shí),許多人因害怕機(jī)器戰(zhàn)勝人類而倒吸一口冷氣。在這幾年里,人工智能做了一些令人驚訝的事情,但沒有一件能以完全相同的方式引發(fā)公眾的想象力。然而,現(xiàn)在,深藍(lán)時(shí)刻又回來(lái)了,因?yàn)橛?jì)算機(jī)正在使用人類認(rèn)為自己具有定義能力的東西:語(yǔ)言?!?/p>
【重點(diǎn)詞匯】
gasp /ɡɑ?sp/ v. (尤指驚訝或疼痛時(shí)的)倒吸氣
triumph /?tra??mf/ v. 打敗 n. 巨大成功
intervening /??nt??vi?n??/ adj. 發(fā)生于其間的
astonishing /??st?n????/ adj. 令人十分驚訝的
[2] Or are they? Certainly, large language models (LLMs), of which the most famous is ChatGPT, produce what looks like impeccable human writing. But a debate has ensued about what the machines are actually doing internally, what it is that humans, in turn, do when they speak—and, inside the academy, about the theories of the world’s most famous linguist, Noam Chomsky.
【真的是嗎?當(dāng)然,大型語(yǔ)言模型 (LLMs),其中最著名的是ChatGPT,可以產(chǎn)生看起來(lái)無(wú)可挑剔的人類寫作。但是隨之而來(lái)的爭(zhēng)論是,機(jī)器內(nèi)部到底是怎么運(yùn)作的,反過來(lái),人類在說(shuō)話時(shí)人體內(nèi)部又在做什么,而在學(xué)術(shù)界,爭(zhēng)論的焦點(diǎn)是世界上最著名的語(yǔ)言學(xué)家諾姆·喬姆斯基的理論?!?/p>
【重點(diǎn)詞匯】
impeccable /?m?pek?b(?)l/ adj. 無(wú)可挑剔的
internally /?n?t??n?li/ adv. 在內(nèi)部
【長(zhǎng)難句分析】
Certainly, large language models (LLMs), of which the most famous is ChatGPT, produce what looks like impeccable human writing.
【結(jié)構(gòu)分析】
主句:large language models (LLMs) produce
定語(yǔ)從句:of which the most famous is ChatGPT
賓語(yǔ)從句:what looks like impeccable human writing
[3] Although Professor Chomsky’s ideas have changed considerably since he rose to prominence in the 1950s, several elements have remained fairly constant. He and his followers argue that human language is different in kind (not just degree of expressiveness) from all other kinds of communication. All human languages are more similar to each other than they are to, say, whale song or computer code. Professor Chomsky has frequently said a Martian visitor would conclude that all humans speak the same language, with surface variation.
【盡管喬姆斯基教授自20世紀(jì)50年代成名以來(lái),他的思想發(fā)生了很大的變化,但有幾個(gè)方面卻在很大程度上保持不變。他和他的追隨者認(rèn)為,人類語(yǔ)言與所有其他形式的交流在種類上(不僅僅是表達(dá)程度)是不同的。所有的人類語(yǔ)言彼此之間的相似性比鯨魚的歌聲或計(jì)算機(jī)代碼更大。喬姆斯基教授經(jīng)常說(shuō),火星訪客會(huì)得出結(jié)論,即所有人都說(shuō)同一種語(yǔ)言,只是表面上有所不同?!?/p>
【重點(diǎn)詞匯】
considerably /k?n?s?d?r?bli/ adv. 相當(dāng)多地
prominence /?pr?m?n?ns/ n. 出名
fairly /?fe?li/ adj. 在很大程度上
degree /d??ɡri?/ n. 程度
frequently /?fri?kw?ntli/ adv. 頻繁地
variation /?ve?ri?e??(?)n/ n. 變化
[4] Perhaps most notably, Chomskyan theories hold that children learn their native languages with astonishing speed and ease despite “the poverty of the stimulus”: the sloppy and occasional language they hear in childhood. The only explanation for this can be that some kind of predisposition for language is built into the human brain.
【也許最值得注意的是,喬姆斯基的理論認(rèn)為,盡管“缺乏刺激”: 他們?cè)谕陼r(shí)聽到的是草率和偶然的語(yǔ)言,但兒童學(xué)習(xí)母語(yǔ)的速度和輕松程度是驚人的。對(duì)此的唯一解釋是,人類大腦中存在某種語(yǔ)言傾向?!?/p>
【重點(diǎn)詞匯】
stimulus /?st?mj?l?s/ n. 刺激
sloppy /?sl?pi/ adj. 草率的
occasional /??ke???n(?)l/ adj. 偶然的
predisposition /?pri?d?sp??z??(?)n/ n. 傾向
[5] Chomskyan ideas have dominated the linguistic field of syntax since their birth. But many linguists are strident anti-Chomskyans. And some are now seizing on the capacities of LLMs to attack Chomskyan theories anew.
【喬姆斯基的思想自誕生之日起就占據(jù)了語(yǔ)言學(xué)領(lǐng)域的主導(dǎo)地位。但許多語(yǔ)言學(xué)家都是強(qiáng)硬的反喬姆斯基主義者。一些人現(xiàn)在正利用LLMs的能力,重新攻擊喬姆斯基的理論?!?/p>
【重點(diǎn)詞匯】
dominate /?d?m?ne?t/ v. 占據(jù)主導(dǎo)地位
strident /?stra?d(?)nt/ adj. 強(qiáng)硬的
seize on 利用
[6] Grammar has a hierarchical, nested structure involving units within other units. Words form phrases, which form clauses, which form sentences and so on. Chomskyan theory posits a mental operation, “Merge”, which glues smaller units together to form larger ones that can then be operated on further (and so on). In a recent New York Times op-ed, the man himself (now 94) and two co-authors said “we know” that computers do not think or use language as humans do, referring implicitly to this kind of cognition. LLMs, in effect, merely predict the next word in a string of words.
【語(yǔ)法具有層次結(jié)構(gòu),嵌套結(jié)構(gòu),涉及單元之間的單元。單詞組成短語(yǔ),短語(yǔ)組成分句,分句組成句子,以此類推。喬姆斯基的理論假定了一種心理操作,即“合并”,它將較小的單元粘合在一起,形成更大的單元,然后可以進(jìn)一步操作(以此類推)。在《紐約時(shí)報(bào)》最近的一篇專欄文章中,他本人(現(xiàn)年94歲)和兩位合著者表示,“我們知道”計(jì)算機(jī)不像人類那樣思考或使用語(yǔ)言,暗指這種認(rèn)知。LLMs實(shí)際上只是預(yù)測(cè)單詞串中的下一個(gè)單詞。】
【重點(diǎn)詞匯】
hierarchical /?ha???rɑ?k?k(?)l/ adj. 按等級(jí)劃分的
posit /?p?z?t/ v. 假設(shè)
implicitly /?m?pl?s?tli/ 含蓄地
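為幫助理解第[6]段所說(shuō)的層級(jí)嵌套結(jié)構(gòu)與“合并”(Merge)操作,下面給出一個(gè)極簡(jiǎn)的示意性Python草稿:把較小的單元兩兩“合并”成更大的單元,得到的是一棵嵌套的樹,而LLM接觸到的線性輸入只是一個(gè)詞串。其中的函數(shù)名與例句均為本文為說(shuō)明而假設(shè),并非喬姆斯基理論或任何LLM的真實(shí)實(shí)現(xiàn)。

```python
# 示意性草稿:用嵌套元組表示“合并”(Merge)產(chǎn)生的層級(jí)結(jié)構(gòu)(merge 與例句均為假設(shè))
def merge(left, right):
    """把兩個(gè)較小的單元粘合成一個(gè)更大的單元(喬姆斯基式 Merge 的玩具替身)。"""
    return (left, right)

# "the cat chased the mouse":?jiǎn)卧~ -> 短語(yǔ) -> 分句
np1 = merge("the", "cat")        # 名詞短語(yǔ)
np2 = merge("the", "mouse")      # 名詞短語(yǔ)
vp = merge("chased", np2)        # 動(dòng)詞短語(yǔ)中嵌套名詞短語(yǔ)
sentence = merge(np1, vp)        # 分句中嵌套兩個(gè)短語(yǔ)

print(sentence)
# (('the', 'cat'), ('chased', ('the', 'mouse')))

# 作為對(duì)比,LLM 看到的線性輸入只是一個(gè)詞串:
print(["the", "cat", "chased", "the", "mouse"])
```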
[7] Yet it is hard, for several reasons, to fathom what LLMs “think”. Details of the programming and training data of commercial ones like ChatGPT are proprietary. And not even the programmers know exactly what is going on inside.
【然而,由于一些原因,很難理解LLMs在“思考”什么。像ChatGPT這樣的商業(yè)軟件的編程細(xì)節(jié)和訓(xùn)練數(shù)據(jù)是專利的。甚至連程序員都不知道里面到底發(fā)生了什么?!?/p>
【重點(diǎn)詞匯】
fathom /?f?e?m/ v. 理解
proprietary /pr??pra??t(?)ri/ adj. 專利的
[8] Linguists have, however, found clever ways to test LLMs’ underlying knowledge, in effect tricking them with probing tests. And indeed, LLMs seem to learn nested, hierarchical grammatical structures, even though they are exposed to only linear input, ie, strings of text. They can handle novel words and grasp parts of speech. Tell ChatGPT that “dax” is a verb meaning to eat a slice of pizza by folding it, and the system deploys it easily: “After a long day at work, I like to relax and dax on a slice of pizza while watching my favourite TV show.” (The imitative element can be seen in “dax on”, which ChatGPT probably patterned on the likes of “chew on” or “munch on”.)
【然而,語(yǔ)言學(xué)家已經(jīng)找到了一些巧妙的方法來(lái)測(cè)試LLM的底層知識(shí),實(shí)際上是用探查性的測(cè)試來(lái)“套”它們。而事實(shí)上,LLM似乎確實(shí)學(xué)到了嵌套的、分層的語(yǔ)法結(jié)構(gòu),盡管它們接觸到的只是線性輸入,即一串串文本。它們能處理新造的詞,也能把握詞性。告訴ChatGPT,“dax”是一個(gè)動(dòng)詞,意思是把一片披薩折起來(lái)吃,系統(tǒng)就能輕松地用上這個(gè)詞:“在漫長(zhǎng)的一天工作之后,我喜歡放松一下,一邊看我最喜歡的電視節(jié)目,一邊dax一塊披薩?!?模仿的成分可以從“dax on”中看出來(lái),ChatGPT很可能是仿照“chew on”或“munch on”之類的說(shuō)法。)】
【重點(diǎn)詞匯】
underlying /??nd??la???/ adj. 基礎(chǔ)的
probing /?pr??b??/ adj. 探查性的
deploy /d??pl??/ v. 有效地利用
imitative /??m?t?t?v/ adj. 模仿的
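第[8]段提到,語(yǔ)言學(xué)家用新造詞(如動(dòng)詞dax)的探查性測(cè)試來(lái)考察LLM對(duì)詞性的把握。下面是一個(gè)示意性的Python草稿,展示這類探測(cè)大致可以如何進(jìn)行;這里假設(shè)使用OpenAI官方Python SDK(openai>=1.0),模型名與提示語(yǔ)均為本文自擬的示例,并非原文作者或相關(guān)研究者的實(shí)際做法。

```python
# 示意性草稿:用新造動(dòng)詞 "dax" 探測(cè)模型的詞類知識(shí)(SDK 用法與模型名均為假設(shè)的示例)
from openai import OpenAI

client = OpenAI()  # 需要預(yù)先設(shè)置 OPENAI_API_KEY 環(huán)境變量

probe = (
    'The word "dax" is a verb meaning to eat a slice of pizza by folding it. '
    "Use it naturally in a sentence about your evening."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # 示例模型名,可替換為任何可用的對(duì)話模型
    messages=[{"role": "user", "content": probe}],
)
reply = resp.choices[0].message.content or ""

print(reply)
# 粗略檢查回復(fù)中是否用到了 dax(僅作演示,并非嚴(yán)謹(jǐn)?shù)恼Z(yǔ)言學(xué)測(cè)試)
print("contains 'dax':", "dax" in reply.lower())
```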
[9] What about the “poverty of the stimulus”? After all, GPT-3 (the LLM underlying ChatGPT until the recent release of GPT-4) is estimated to be trained on about 1,000 times the data a human ten-year-old is exposed to. That leaves open the possibility that children have an inborn tendency to grammar, making them far more proficient than any LLM. In a forthcoming paper in Linguistic Inquiry, researchers claim to have trained an LLM on no more text than a human child is exposed to, finding that it can use even rare bits of grammar. But other researchers have tried to train an LLM on a database of only child-directed language (that is, of transcripts of carers speaking to children). Here LLMs fare far worse. Perhaps the brain really is built for language, as Professor Chomsky says.
【那么“刺激貧乏”又怎么說(shuō)呢?畢竟,據(jù)估計(jì),GPT-3(在GPT-4最近發(fā)布之前,它一直是ChatGPT底層所用的LLM)接受訓(xùn)練的數(shù)據(jù)量,大約是一個(gè)十歲兒童所接觸數(shù)據(jù)量的1000倍。這就留下了一種可能性:兒童天生就有掌握語(yǔ)法的傾向,因此遠(yuǎn)比任何LLM都學(xué)得精通。在即將發(fā)表于《語(yǔ)言學(xué)探究》(Linguistic Inquiry)的一篇論文中,研究人員聲稱,他們只用不超過人類兒童接觸量的文本訓(xùn)練了一個(gè)LLM,發(fā)現(xiàn)它甚至能使用一些罕見的語(yǔ)法。但另一些研究人員嘗試只用“兒向語(yǔ)言”數(shù)據(jù)庫(kù)(即看護(hù)人對(duì)兒童說(shuō)話的文字記錄)來(lái)訓(xùn)練LLM。在這種情況下,LLM的表現(xiàn)要差得多。也許正如喬姆斯基教授所說(shuō),人類大腦真的是為語(yǔ)言而生的。】
【重點(diǎn)詞匯】
proficient /pr??f??(?)nt/ adj. 精通的
forthcoming /?f??θ?k?m??/ adj. 即將發(fā)生的
rare /re?(r)/ adj. 罕見的
transcript /?tr?nskr?pt/ n. (根據(jù)錄音或筆記整理的)文字本
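第[9]段中“大約是十歲兒童所接觸數(shù)據(jù)的1000倍”只是一個(gè)數(shù)量級(jí)估算。下面用一段簡(jiǎn)單的Python算術(shù)示意這一估算的大致來(lái)由;其中GPT-3的訓(xùn)練詞元數(shù)取其論文公布的約3000億,兒童每年聽到的詞數(shù)取常見的粗略估計(jì)值,二者都是本文為說(shuō)明而引入的假設(shè)數(shù)字,并非原文給出的數(shù)據(jù)。

```python
# 粗略的數(shù)量級(jí)估算(數(shù)字均為常見估計(jì)值,僅作示意)
gpt3_training_tokens = 300e9       # GPT-3 論文報(bào)告的訓(xùn)練詞元規(guī)模,約 3000 億
words_per_year = 10e6              # 兒童每年聽到的詞數(shù),常見估計(jì)為數(shù)百萬(wàn)到上千萬(wàn)
child_words_by_age_10 = words_per_year * 10

ratio = gpt3_training_tokens / child_words_by_age_10
print(f"ratio ≈ {ratio:,.0f}x")    # 約幾千倍;若取更高的兒童輸入估計(jì)值,則接近原文的約 1000 倍
```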
[10] It is difficult to judge. Both sides of the argument are marshalling LLMs to make their case. The eponymous founder of his school of linguistics has offered only a brusque riposte. For his theories to survive this challenge, his camp will have to put up a stronger defence.
【這很難判斷。爭(zhēng)論的雙方都在召集LLMs來(lái)證明自己的觀點(diǎn)。他的語(yǔ)言學(xué)學(xué)派的同名創(chuàng)始人只給出了一個(gè)無(wú)禮的反駁。為了讓他的理論經(jīng)受住挑戰(zhàn),他的陣營(yíng)必須建立更強(qiáng)大的防御?!?/p>
【重點(diǎn)詞匯】
marshal /?mɑ??(?)l/ v. 召集
eponymous /??p?n?m?s/ adj. 同名的
brusque /bru?sk/ adj. 無(wú)禮的
riposte /r??p?st/ n. 巧妙的反駁
put up 進(jìn)行,提出(如 put up a defence 進(jìn)行辯護(hù))
需獲取更多管理類聯(lián)考考研英語(yǔ)二外刊選讀內(nèi)容,可點(diǎn)擊下方“資料下載”處,免費(fèi)下載!