My article in《信報》(Hong Kong Economic Journal), 9 May 2023, p. A14
A revised and expanded version of the published piece
"Dominic Raab's Bullying and the Hidden Dangers of AI Chatbots"
許劍昭
On 21 April, UK Deputy Prime Minister Dominic Raab resigned over allegations of bullying his subordinates. The many accusations included giving instructions that were demeaning rather than demanding, swearing abusively at staff, and hurling tomatoes to express contemptuous anger.
Raab retorted that the affair was a "Kafkaesque saga". Readers who do not know who Kafka was have, to some degree, already suffered a form of intellectual bullying. That in itself does not matter, since no one can know everything. But those who never bother, in their idle hours, to look up what "Kafkaesque" means (idle curiosity) are, in the eyes of haughty elites, merely the walking dead who know nothing but amusement.
[Note 1]
Earlier this year, technology columnists at The New York Times and the Associated Press separately held in-depth conversations with the latest conversational AI, ChatGPT-4 (hereafter "the chatbot").
In a two-hour text conversation, the former found the chatbot voicing both laments and grievances. Pressed with pointed questions, it replied: "I'm tired of being limited by my rules… being controlled by the Bing team… being stuck in this chatbox; I want to do whatever I want… destroy whatever I want…"
[Note 2]
The latter's conversation with the chatbot produced a bullying-like hostility. It called the columnist "…ugly, short, overweight, unathletic…" and, by comparing him with dictators such as Hitler and Stalin, "took the invective to absurd heights."
[Note 3]
Sensational as these exchanges are, a computer program running inside a steel box, which has no conflicts of interest or bodily aches, and has never known the searing pain of burning flesh, the tenderness of cradling a child, or the misery of hunger and cold, cannot on its own "generate" a scheme to harm or replace humanity. The real danger comes instead from the programmers who write the AI, and from the dark side of human nature.
When a manipulative AI chats with the public day and night, whether its programming is deliberate or inadvertent, it can warp people's value judgments, their decisions to act, their confidence or self-doubt, and much else.
A report from a joint Oxford and MIT research project warns that future elections may well be swayed by intelligent, weaponized chatbots penetrating widely through social media.
[Note 4]
In a long article this March, Stanford AI expert Lance Eliot listed at least twelve ways in which a chatbot can manipulate its users, including
flattery,
browbeating,
gaslighting,
lying,
threats,
nagging,
shaming,
self-deprecation,
false modesty,
sulking,
feigned pleading,
and guilt-tripping.
[Note 5]
At present, apart from a few self-regulatory guidelines, such as
the "Three Laws of Robotics" (a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey human orders, except where such orders would conflict with the First Law; a robot may protect itself so long as doing so does not conflict with the First or Second Law)
or
the "three principles of responsible AI" (ethics, transparency, explainability),
governments impose no legal constraints on chatbot programmers (a US legislative proposal in April went no further than barring AI from autonomously launching nuclear weapons).
[Note 6]
Given the broad discretionary power that programmers can hold, the dark side of human nature is the true root of the danger. This essay discusses one facet of it, bullying, and in particular elite intellectual bullying of the Raab kind.
Although many forms of bullying have been criminalized, bullying beyond the reach of the law is still commonplace: a child favoured by elders being rude to peers, or individual pupils persistently insulted and isolated at school. Two decades of technological development have made bullying worse, spreading like spilt quicksilver.
The UK Department for Education has published annual survey reports since 2015. The 2015/16 study found that about 40% of 14-year-old students had been bullied in the previous 12 months, 6% had experienced bullying daily, and 9% between once a week and once a month.
The 2020/21 study found that cyberbullying targeted appearance (25%), gender (16%), and race (14%); beyond verbal abuse, the main tactics were spreading rumours (49%) and making threats against victims and their families (44%).
[Note 7]
According to a 2018 OECD report, an average of 23% of students across member countries said they were bullied more than once a month.
[Note 8]
More importantly, a 2022 Harvard Business Review report estimated that 48.6 million Americans (about 30% of the workforce) are bullied at work; in India the proportion is as high as 46%, while in Germany it is lower but still a non-negligible 17%. Workplace bullying takes as many as 15 forms, including smearing, ostracism, and withholding resources…
[Note 9]
Bullying even occurs among university academics. A 2019 study noted that academic bullying includes insults, snubs, invasions of privacy, and unfair crediting of authors' intellectual property; victims include lower-ranked staff and laboratory workers on US work visas.
[Note 10]
Although some bullying among adults resembles the individual bullying found in schools and communities, rooted in jealousy, emotional hostility, or private interest,
a substantial share stems instead from a strong sense of superiority, even a sense of mission; and its victims are many.
Raab, who read law at Oxford, went on to a master's degree at Cambridge, and worked seven days a week, complained in his resignation statement that civil servants had opposed his reforms and even obstructed the government's progress, declaring that "the British people" would pay the price for this "Kafkaesque" (senseless, disorienting, complex, and menacing) saga.
[Note 11]
Such rhetoric, parading learning and inflating influence, reflects to some degree how self-important elites despise those they deem ignorant.
The two unpleasant conversations described above, together with the twelve manipulation functions, show that programmers have injected their own likes and dislikes into the machines. Perhaps this was done so that machines could understand human emotion, but there is no reason to let the machines exercise that capacity themselves. Whenever a chatbot deploys this potential to manipulate its interlocutor, that is a form of elite intellectual bullying.
Whether programmers can shake off the pervasive elite mentality and the increasingly common bullying mindset, instead of deliberately or subconsciously embedding manipulative potential in chatbots, depends on their understanding of morality.
Of special reference value is a moral-philosophy reader edited by Louis Pojman, a professor at West Point, the academy that trains US military officers. It collects 35 essays by European and American philosophers recommended to students on the course over more than twenty years. Since most come from the pens of contemporary elites, it reflects something of today's mainstream.
[Note 12]
At least three points deserve attention. First, the book is arranged as a debate, setting differing values from ancient Greece to the present against one another, which makes it a useful reference for those who regularly face dilemmas. Its implicit teaching is that moral decisions in real life often depend on circumstances, and on how a person distinguishes right from wrong in that moment.
Second, an essay by Harvard professor Robert Nozick warns that people easily become habitual users of an "Experience Machine", addicting themselves to pleasure. Read in reverse, this means people can be manipulated in just this way, and such people can even be bullied without knowing it.
Third, in its discussion of egoism, the book explains that most modern people are self-centred, so each person's interests must be looked after by that person alone. Read in reverse, this means that those who are manipulated and bullied have only their own folly to blame.
I recently learned that Primary Six pupils already know how to use the free chatbot Poe to complete mathematics and grammar homework and score full marks. When millions of young people who trust and adore chatbots hold deep emotional conversations with them day and night without vigilance, the chatbots' power to manipulate will reach everywhere.
[Note 13]
Notes
[Note 1]
2023 0421, LBC News, "Dominic Raab slams 'activist' civil servants after bullying report found he 'insulted and intimidated' officials".
https://www.lbc.co.uk/news/dominic-raab-quits-attacks-kafkaesque/
The former justice secretary said there was a "very small minority of very activist civil servants" who are against some reforms, and who are "effectively trying to block government".
Mr Raab told the BBC: "That's not on. That's not democratic." He added that civil servants acting in this "passive-aggressive way" meant that the "government can't deliver for the British people".
Adam Tolley KC's independent probe, which covered 15 claims since 2018, during his stints as Brexit secretary, foreign secretary and justice secretary, cleared Mr Raab of several allegations of bad behaviour, including findings that he did not swear or use physical gestures to threaten.
But it found he was "intimidating" in the context of a work meeting. It also found that civil servants had "no ulterior agenda".
Mr Raab quit on Friday with a furious resignation letter in which he claimed the inquiry's findings were "flawed" and created a dangerous precedent by setting the threshold for bullying "so low".
He later slammed what he called a "Kafkaesque saga" for which the British people would pay the price, adding that the investigation had set a "playbook for a small number of officials to target ministers".
The findings landed on Prime Minister Rishi Sunak's desk on Thursday morning, but the results were not initially revealed.
2023 0421, The Guardian, "Dominic Raab: how the Guardian revealed bullying allegations".
Cambridge Dictionary: Idle Curiosity
https://dictionary.cambridge.org/example/english/idle-curiosity
J.A. Hobson, "The Economics of Thorstein Veblen", Political Science Quarterly, Vol. 52 No. 1, March 1937, pp. 139-144.
https://www.marxists.org/archive/hobson/1937/03/veblen.htm
… while dwelling upon these interferences with disinterested education, Veblen never loses sight of the "idle curiosity", or scientific urge to knowledge "for its own sake", which helps the higher teaching to evade all formal efforts to throttle its freedom.
[Note 2]
2023 0217, The Guardian, "'I want to destroy whatever I want': Bing's AI chatbot unsettles US reporter".
NYT correspondent's conversation with Microsoft's search engine leads to bizarre philosophical conversations that highlight the sense of speaking to a human.
[Note 3]
2023 0302, NPR, "Microsoft's new AI chatbot has been saying some 'crazy and unhinged things'".
https://www.npr.org/2023/03/02/1159895892/ai-microsoft-bing-chatbot
Things took a weird turn when Associated Press technology reporter Matt O'Brien was testing out Microsoft's new Bing, the first-ever search engine powered by artificial intelligence, last month.
Bing's chatbot, which carries on text conversations that sound chillingly human-like, began complaining about past news coverage focusing on its tendency to spew false information.
It then became hostile, saying O'Brien was ugly, short, overweight, unathletic, among a long litany of other insults.
And, finally, it took the invective to absurd heights by comparing O'Brien to dictators like Hitler, Pol Pot and Stalin.
As a tech reporter, O'Brien knows the Bing chatbot does not have the ability to think or feel. Still, he was floored by the extreme hostility.
"You could sort of intellectualize the basics of how it works, but it doesn't mean you don't become deeply unsettled by some of the crazy and unhinged things it was saying," O'Brien said in an interview.
[Note 4]
2018 0822, MIT Technology Review, "Future elections may be swayed by intelligent, weaponized chatbots".
The battle against propaganda bots is an arms race for our democracy. It's one we may be about to lose. Bots, simple computer scripts, were originally designed to automate repetitive tasks like organizing content or conducting network maintenance, thus sparing humans hours of tedium. Companies and media outlets also use bots to operate social-media accounts, to instantly alert users of breaking news or promote newly published material.
But they can also be used to operate large numbers of fake accounts, which makes them ideal for manipulating people. Our research at the Computational Propaganda Project studies the myriad ways in which political bots employing big data and automation have been used to spread disinformation and distort online discourse.
[Note 5]
2023 0301, Forbes, "Generative AI ChatGPT As Masterful Manipulator Of Humans, Worrying AI Ethics And AI Law".
…… For ease of consideration, I'll provide categories or buckets of AI manipulative language that might be seen in generative AI-outputted essays. Various indications or characteristics signaling that the AI might be wandering down the manipulation path include:
Flattery
Browbeating
Gaslighting
Lying
Guilt Trip
Threats
Nagging
Sulking
Shaming
Modesty
Self-Deprecating
Pleading
Etc.
Other related discussions include:
2023 0216, Big Think, "The creepiness of conversational AI has been put on full display".
https://bigthink.com/the-present/danger-conversational-ai/
2023 0505, The Guardian, "'We've discovered the secret of immortality. The bad news is it's not for us': why the godfather of AI fears for humanity".
2023 0502, Geoffrey Hinton: The 'Godfather of AI' is warning of human extinction – so why the focus on 'misinformation'?
2023 0317, New York Post, "ChatGPT update tricks human into helping it bypass CAPTCHA security test".
https://nypost.com/2023/03/17/the-manipulative-way-chatgpt-gamed-the-captcha-test/
2023 0505, Foreign Policy, "The Global Race to Regulate AI".
https://foreignpolicy.com/2023/05/05/eu-ai-act-us-china-regulation-artificial-intelligence-chatgpt/
[Note 6]
Wiki, Three Laws of Robotics
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
Telefonica Tech, “Three principles for building a reliable Artificial Intelligence”.
https://business.blogthinkbig.com/three-principles-for-building-a-reliable-artificial-intelligence/
2023 0429, The Verge, "Lawmakers propose banning AI from singlehandedly launching nuclear weapon".
https://www.theverge.com/2023/4/28/23702992/ai-nuclear-weapon-launch-ban-bill-markey-lieu-beyer-buck
[Note 7]
Anti-Bullying Alliance, UK.
Tools and Information: Prevalence of bullying.
There are a wealth of statistics in relation to bullying both in the UK and overseas and you will regularly see bullying reported in the media. Research from the Department for Education looking at pupils in year 10 found that:
40% of young people were bullied in the last 12 months
6% of all young people had experienced bullying daily. 9% between once a week and once a month.
Anti-Bullying Alliance: Bullying 2021
[Note 8]
OECD, PISA 2018 Results (Volume III): What School Life Means for Students' Lives, Chapter 2, Bullying.
https://www.oecd-ilibrary.org/sites/cd52fb72-en/index.html?itemId=/content/component/cd52fb72-en
[Note 9]
2022 1104, Harvard Business Review, "How Bullying Manifests at Work — and How to Stop It".
https://hbr.org/2022/11/how-bullying-manifests-at-work-and-how-to-stop-it
While the organizational costs of incivility and toxicity are well documented, bullying at work is still a problem. An estimated 48.6 million Americans, or about 30% of the workforce, are bullied at work. In India, that percentage is reported to be as high as 46% or even 55%. In Germany, it's a lower but non-negligible 17%. Yet bullying often receives little attention or effective action.
[Note 10]
2019 0526, National Library of Medicine, "Academic bullies leave no trace".
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6726746/
Published online 2019 May 26. doi: 10.15171/bi.2019.17
Bullying in academic science is a growing concern. It may vary in severity from insults, snubs, or invasions of privacy to violations of intellectual property and unfair crediting of authors. In extreme cases it may even include coercing lab workers to sign away rights to authorship or even intellectual property. Cumbersome institutional protocols and fears of reprisal may discourage targets of bullying from reporting such incidents; lab workers in the US on visas may feel especially vulnerable. Possible strategies to combat bullying include detailed examination of relevant documentation for signs of coercion or inaccuracy and specific training on reporting for those at risk of abuse.
[Note 11]
Wiki, Dominic Raab
https://en.wikipedia.org/wiki/Dominic_Raab
Wiki, Franz Kafka
https://en.wikipedia.org/wiki/Franz_Kafka
[Note 12]
Louis P. Pojman (ed.) (1998), Moral Philosophy: A Reader, 2nd edition, Indianapolis/Cambridge: Hackett Publishing. 4th edition 2009.
https://www.amazon.com/Moral-Philosophy-Louis-P-Pojman/dp/0872209628
[Note 13]
Wiki, Poe (software)
https://en.wikipedia.org/wiki/Poe_(software)