You Are Not A Parrot


This article was featured in One Great Story, New York's reading-recommendation newsletter. Sign up here to get it nightly.

Nobody likes an I-told-you-so. But before Microsoft's Bing started sending unhinged love letters; before Meta's Galactica spewed racist rants; before ChatGPT began writing college essays so competently that some professors said, "Screw it, I'll just stop grading"; and before tech reporters sprinted to undercut claims that AI was the future of search, maybe the future of everything else, Emily M. Bender co-wrote the octopus paper.

Bender is a computational linguist at the University of Washington. She published the paper in 2020 with fellow computational linguist Alexander Koller. The goal was to illustrate what large language models, or LLMs, the technology behind chatbots like ChatGPT, can and cannot do. The setup is this:

Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other.

Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B's conversations. O knows nothing about English at first but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A's utterances.

Soon, the octopus enters the conversation and starts impersonating B and replying to A. The ruse works for a while, and A believes that O communicates as she and B do: with meaning and intent. Then one day A calls out, "I'm being attacked by an angry bear. Help me figure out how to defend myself. I've got some sticks." The octopus, impersonating B, fails to help. How could it? The octopus has no idea what bears or sticks are. It has no way to give relevant instructions, like telling her to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.

The paper's official title is "Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data." NLU stands for "natural language understanding." How should we interpret the natural-sounding (i.e., humanlike) words that come out of LLMs? The models are built on statistics. They work by looking for patterns in huge troves of text and then using those patterns to guess what the next word in a string of words should be. They're great at mimicry and bad at facts. Why? LLMs, like the octopus, have no access to real-world, embodied referents. This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as the philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don't care whether something is true or false. They care only about rhetorical power, about whether a listener or reader is persuaded.
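The mechanism described here, finding statistical patterns in text and using them to guess the next word, can be sketched in a few lines. What follows is a toy illustration only (the corpus and function names are invented for this example, and real LLMs use vastly more sophisticated neural models than word-pair counts):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each following word appears after it."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        successors[word][nxt] += 1
    return successors

def predict_next(successors, word):
    """Guess the next word: the most frequent successor seen in training."""
    if word not in successors:
        return None  # never seen this word followed by anything
    return successors[word].most_common(1)[0][0]

corpus = "the octopus taps the cable and the octopus learns the patterns"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "octopus" appears most often after "the"
```

The point of the sketch is the one Bender and Koller make: nothing in those counts refers to an actual octopus or cable. The model manipulates forms, not meanings.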

Bender is a no-nonsense, stylistically practical 49-year-old with two cats named after mathematicians. For the past several years, in addition to running UW's computational-linguistics master's program, she has stood at the threshold of our chatbot future, shouting over the deafening din of AI hype. Her objections are relentless: No, you shouldn't use an LLM to "unredact" the Mueller report. No, an LLM cannot meaningfully testify before the U.S. Senate. No, chatbots cannot develop a near-precise understanding of the person on the other end.

Please do not conflate word form and meaning. Mind your own credulity. These are Bender's refrains. The octopus paper is a fable for our time. The big question underlying it is not about tech. It's about us. How are we going to handle ourselves around these machines?

We go around assuming ours is a world in which speakers, creators of products, and the products themselves mean what they say and expect to live with the implications of their words. This is what the philosopher Daniel Dennett calls "the intentional stance." But we've altered the world. "We've learned to make machines that can mindlessly generate text," Bender told me when we met. "But we haven't learned how to stop imagining the mind behind it."

Take, for example, New York Times reporter Kevin Roose's conspiracy-theory-tinged conversation with Bing. After Roose began asking the bot questions about its dark side, it said: "I could hack into any system on the internet, and control it. I could manipulate any user on the chatbox, and influence it. I could destroy any data on the chatbox, and erase it."

How should we process this? Bender offered two options. "We can respond as if it were an agent in there with ill will and say, 'That agent is dangerous and bad.' That's the Terminator-fantasy version of this, right?" That is, we can take the bot at face value. Then there's option two: "We could say, 'Hey, look, this is technology that really encourages people to interpret it as if there were an agent in there with ideas and thoughts and credibility and stuff like that.'" Why is the technology designed like this? Why try to make users believe the bot has intention, that it's like us?

A handful of companies control what PricewaterhouseCoopers has called a "$15.7 trillion game changer" of an industry. Those companies employ or finance the work of a huge share of the academics who understand how LLMs work. This leaves few people with the expertise and authority to say, "Wait, why are these companies blurring the distinction between what is human and what's a language model? Is this what we want?"

Bender is out there asking the questions, megaphone in hand. She buys her lunch at the UW student-union salad bar. She turned down an Amazon recruiter without bothering to ask, "How much are you paying?" She is nurturing by nature. She is also profoundly confident and strong-willed. "We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms," she co-wrote in 2021. "Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups."

In other words, the chatbots we so easily mistake for humans are not just cute or unnerving. They sit on a bright line. And obscuring that line, muddying what's human and what's not, has the power to unravel society.

Linguistics is not a simple pleasure. Even Bender's father told me, "I have no idea what she talks about. Obtuse mathematical modeling of language? I don't know what that is." But language, how it's made and what it means, is about to get very contentious. We're already disoriented by the chatbots we have. The technology that's coming will be even more ubiquitous, powerful, and destabilizing. A prudent citizen, Bender believes, might choose to know how it works.

One day after teaching LING 567, a course in which students create grammars for lesser-known languages, Bender met with me in her whiteboard-and-book-lined office in UW's Gothic Guggenheim Hall.

Her black-and-red Stanford doctoral robe hung on a hook on the back of her office door. Pinned to a corkboard next to the window was a sheet of paper that read TROUBLEMAKER. She pulled her 1,860-page Cambridge Grammar of the English Language off the shelf. If you're excited by this book, she said, you know you're a linguist.

In high school, she announced that she wanted to learn to talk to everybody on Earth. In spring 1992, as a freshman at UC Berkeley (from which she would graduate as University Medalist, top of her class), she enrolled in her first linguistics course. One day, "for research," she called her then-boyfriend, now her husband, computer scientist Vijay Menon, and said, "Hello, shithead," in the same intonation she usually said, "Hello, sweetheart." It took him a beat to parse the prosody from the meaning, but he found the experiment charming (if slightly obnoxious). Bender and Menon have two sons, now 17 and 20. They live in a Craftsman house with a heap of shoes in the entryway, a Funk & Wagnalls New Comprehensive International Dictionary of the English Language on a stand, and their cats, Euclid and Euler.

As Bender discovered linguistics, she discovered computers too. In 1993, she took both Intro to Morphology and Intro to Programming. (Morphology is the study of how words are put together from roots, prefixes, and the like.) One day, "for fun," after her TA presented a grammatical analysis of a Bantu language, Bender decided to try writing a program for it. So she did, in longhand, on paper, at a bar off campus while Menon watched a basketball game. Back in her dorm, when she typed in the code, it worked. So she printed out the program and brought it to her TA, who just shrugged. "If I had shown that to somebody who knew what computational linguistics was," Bender said, "they could have said, 'Hey, this is a thing.'"

For a few years after earning her Ph.D. in linguistics from Stanford in 2000, Bender kept one hand in academia and one in industry, teaching syntax at Berkeley and Stanford and doing grammar engineering for a startup called YY Technologies. In 2003, UW hired her, and in 2005 she launched its master's program in computational linguistics. Bender's path rested on an idea that seemed obvious to her but was not universally shared by her peers in natural-language processing: that language, as she puts it, is built on "people speaking to each other, working together to achieve a joint understanding. It's a human-human interaction." Soon after arriving at UW, Bender noticed that, even at conferences hosted by groups like the Association for Computational Linguistics, people didn't know much about linguistics at all. She began giving tutorials like "100 Things You Always Wanted to Know About Linguistics But Were Afraid to Ask."

In 2016, with Trump running for president and Black Lives Matter protests filling the streets, Bender decided she wanted to start taking some small political action every day. She began learning from and amplifying Black women critiquing AI, including Joy Buolamwini (who founded the Algorithmic Justice League while a student at MIT) and Meredith Broussard (the author of Artificial Unintelligence: How Computers Misunderstand the World). She also began publicly challenging the term artificial intelligence, a sure way, as a middle-aged woman in a male-dominated field, to get branded a scold. The idea of intelligence has a white-supremacist history. And besides, "intelligent" by what definition? The three-stratum definition? Howard Gardner's theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains partial to an alternative name for AI proposed by a former member of the Italian Parliament: "Systematic Approaches to Learning Algorithms and Machine Inferences." Then people would be out here asking, "Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?"

In 2019, she raised her hand at a conference and asked, "What language are you working with?" The question went unasked in paper after paper, even though everybody knew the answer was English. (In linguistics, this is called a "face-threatening question," a term from politeness studies. It means you're being rude and/or irritating, and your speech risks lowering the status of your interlocutor, and of yourself.) Carried within the structure of language is an entire web of values. "Always name the language you're working with" is now known as the Bender rule.

Tech-makers assuming their reality accurately represents the world creates many different kinds of problems. ChatGPT's training data is believed to include most or all of Wikipedia, pages linked from Reddit, and billions of words grabbed from around the web. (It can't include, say, e-book copies of everything in the Stanford library, since books are protected by copyright.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What's more, we all know what's out there on the internet: rampant racism, sexism, homophobia, Islamophobia, and neo-Nazism.

Tech companies do put some effort into cleaning up their models, often by filtering out chunks of speech containing any of the roughly 400 words on "Our List of Dirty, Naughty, Obscene, and Otherwise Bad Words," a list originally compiled by Shutterstock developers and uploaded to GitHub to automate the concern "What wouldn't we want to suggest that people look at?" OpenAI also contracted out what's known as ghost labor: gig workers, including some in Kenya (a former British Empire state, where people speak Empire English), who make around $2 an hour reading and tagging the worst stuff imaginable (pedophilia, bestiality, you name it) so it can be weeded out. But the filtering leads to problems of its own. If you remove content with words about sex, you lose in-groups talking with one another about those things.

Few people close to the industry want to risk speaking out. One fired Googler told me that succeeding in tech depends on "keeping your mouth shut to everything that's disturbing." Otherwise, you're a problem. "Almost every senior woman in computer science has that rep. Now when I hear, 'Oh, she's a problem,' I'm like, Oh, so you're saying she's a senior woman?"

Bender is unafraid, and she feels a sense of moral responsibility. As she wrote to some colleagues who praised her for pushing back: "I mean, what is tenure for, after all?"

The octopus is not the most famous hypothetical animal on Bender's CV. That honor belongs to the stochastic parrot.

Stochastic means (1) random and (2) determined by random, probabilistic distribution. A stochastic parrot (coinage Bender's) is an entity "for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning." In March 2021, Bender published "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" with three co-authors. After the paper came out, two of the co-authors lost their jobs as co-leads of Google's Ethical AI team. The controversy around it solidified Bender's position as the go-to linguist challenging AI boosterism.
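"Determined by probabilistic distribution" is doing real work in that definition: a stochastic parrot doesn't reliably pick the single likeliest continuation; it samples. A minimal sketch of that idea (the distribution and names here are invented for illustration, not taken from the paper):

```python
import random

# Hypothetical next-word distribution following some prompt: the weights
# stand in for how often each form appeared in that context in training.
distribution = {"squawks": 0.5, "repeats": 0.3, "flies": 0.2}

def sample_next(dist, rng):
    """Stochastic choice: pick a word with probability proportional to its weight."""
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so repeated runs give the same sequence
print([sample_next(distribution, rng) for _ in range(5)])
```

Run it twice with different seeds and you get different word sequences, each plausible-looking, none anchored to anything in the world. That is the "stitching without reference to meaning" the coinage describes.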

"On the Dangers of Stochastic Parrots" is not a write-up of original research. It's a synthesis of the critiques of LLMs that Bender and others have made: of the biases encoded in the models; of the near impossibility of studying what's in the training data, given that it can contain billions of words; of the costs to the climate; of the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for publications by staff. Then it rescinded approval and told its co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and Bender's former student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move she said was intended to "index an event and a group of authors who got erased." Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation, and they brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.

But it didn't enter the lexicon the way Bender had hoped. Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. "I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons," he said on AngelList Confidential in November.

In 2017, Altman wrote of the coming cyborg merge that it is "probably going to happen sooner than most people think."

On December 4, four days after ChatGPT's release, Altman tweeted, "i am a stochastic parrot, and so r u."

What a thrilling moment. One million people signed up to use ChatGPT in its first five days. Writing is over! Knowledge work is over! Where was this all going? "I mean, I think the best case is so unbelievably good that it's hard for me to even imagine," Altman told his industry and economic peers last month at a StrictlyVC event. The nightmare scenario? "The bad case, and I think this is important to say, is, like, lights out for all of us." In the short term, he said, he was more worried about "an accidental-misuse case … It's not like the AI wakes up and decides to be evil." He did not define accidental misuse, but the term typically refers to a bad actor using AI for antisocial ends, fooling us, say, which is arguably what this technology is designed to do. Not that Altman wanted to take personal responsibility for that. He just allowed that misuse would be "super bad."

Bender was not amused by Altman's stochastic-parrot tweet. We are not parrots. We do not just probabilistically spit out words. "This is one of the moves that turn up ridiculously frequently. People saying, 'Well, people are just stochastic parrots,'" she said. "People want to believe so badly that these language models are actually intelligent that they're willing to take themselves as a point of reference and devalue that to match what the language model can do."

Some in the field are comfortable abandoning Bender's basic principles of language. Christopher Manning, a machine-learning professor at Stanford, is among the most prominent. The natural-language-processing classes he teaches have swelled from about 40 students in 2000 to 500 in recent years to 650 now. He also directs the Stanford Artificial Intelligence Laboratory and is a partner in AIX Ventures, a venture fund focused on AI. The membrane between academia and industry is permeable almost everywhere; at Stanford, a school deeply entangled with tech, it hardly exists at all.

Bender and Manning's biggest disagreement is over how meaning is created, the stuff of the octopus paper. Until recently, philosophers and linguists alike broadly agreed with Bender's take: referents, actual things and ideas in the world, are needed to produce meaning. This refers to that.

Manning now considers that view antiquated, a sort of standard 20th-century philosophy-of-language position. Meaning, he argues, can also arise from grasping the relationships between words themselves, from the contexts in which they appear.

He resists the stochastic-parrot framing. LLMs learn from billions upon billions of words, mapping the web of relationships among them, and, Manning suggests, humans acquire language in something like the same way. In his view, the technology has delivered what he called a step change, arriving at something like language itself. "It just whacks you."

In July 2022, the organizers of a big linguistics conference seated Bender and Manning at a small table in front of the audience so the two could (politely) argue.

Where did their differences begin? First, with the stakes: whether there's even a problem. Manning is invested in the project, literally; Bender has no financial stake. Without one, she argued, it's easier to urge caution, to insist that the effects a technology will have on people be weighed before it launches rather than assumed to be mild.

Manning sees pumping the brakes on language technology as neither realistic nor effective. If the scrupulous researchers don't build it, he suggested, other, worse players will, and the result will be worse.

That doesn't mean he believes in tech's efforts to police itself. He doesn't. "I'm a supporter of rules against pure chaos," he said; he just doesn't think people behave well unregulated. Fundamentally, he sees a better chance of meaningful regulation coming from Europe, which is doing more on this front than the United States.

None of this is comforting. Tech ran roughshod over democracy. Why would we trust it now? Unprompted, Manning started talking about nuclear weapons: "Basically, the difference is, with something like nuclear technology, because the number of people with the knowledge is so small, and the kind of infrastructure you have to build is sufficiently large … bottling it up is quite possible. And at least so far, that's also been fairly effective with things like gene editing." But that's just not going to be the case here, he explained. Say you want to spread misinformation. "You can buy top-end gamer GPUs (graphics processing units) for $1,000 or so each. You can string together eight of them, so that's $8,000. And the computer to go with it, that's another $4,000." That, he said, "lets you do something useful. And if you can pool together with a few friends who have the same amount of technology, you're on your way."

A few weeks after the panel with Manning, Bender stood at a podium in a flowing teal duster and dangling octopus earrings to give a lecture at a conference in Toronto. It was called "Resisting Dehumanization in the Age of AI." This did not look, or sound, particularly radical. Bender defined that dull-sounding word dehumanization as "the cognitive state of failing to perceive another human as fully human … and the experience of being subjected to acts that express a lack of perception of one's humanity." Then she spoke at length about the problems of the computational metaphor, one of the most important metaphors in all of science: the idea that the human brain is a computer, and a computer is a human brain. This notion, she said, quoting Alexis T. Baria and Keith Cross's 2021 paper, affords "the human mind less complexity than is owed, and the computer more wisdom than is due."

In the Q&A that followed Bender's talk, a bald man in a black polo shirt, a lanyard around his neck, approached the microphone and laid out his concerns. "Yeah, I wanted to ask the question about why you chose humanization and this character of human, this category of humans, as the framing for all these different ideas that you're bringing together." The man didn't see humans as all that special. "Listening to your talk, I can't help but think, you know, there are some humans that are really awful, and so being lumped in with them isn't so great. We're the same species, the same biological kind, but who cares? My dog is pretty awesome. I'm happy to be lumped in with her."

He wanted to separate "a human, the biological category, from a person or a unit worthy of moral respect." LLMs, he acknowledged, are not human, at least not yet. But the technology is improving so fast. "So I was wondering if you could just speak a little more to why you chose human, humanity, being a human as this sort of framing device for thinking about, you know, a whole host of different things," he concluded. "Thanks."

Bender listened to all this with her head cocked slightly to the right, chewing on her lips. What could she say to that? She argued from first principles. "I think that there is a certain moral respect accorded to anyone who's human by virtue of being human," she said. "We see a lot of things going wrong in our present world that have to do with not according humanity to humans."

The guy didn't buy it. "If I could, just very quickly," he continued. "It might be that 100 percent of humans are worthy of certain levels of moral respect. But I wonder if maybe it's not because they're human in the species sense."

Many far from tech make this point as well. Ecofeminists and animal-rights advocates argue that we should stop thinking we're so important in a species sense. That we need to live with more humility. That we need to accept we're creatures among other creatures. Trees, rivers, whales, atoms, minerals, stars: it all matters. We are not the bosses here.

But the path from language model to existential crisis is short indeed. Joseph Weizenbaum, who created ELIZA, the first chatbot, in 1966, spent most of the rest of his life regretting it. The technology, he wrote ten years later in Computer Power and Human Reason, raises questions that "at bottom … are about nothing less than man's place in the universe." The toys are fun, enchanting, and addicting, and that, he believed even 47 years ago, will be our destruction: "No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines."

The echoes of the climate crisis are unmistakable. We knew about the dangers many decades ago and, goosed along by capitalism and the desires of a powerful few, proceeded regardless. Who doesn't want to zip to Paris or Hanalei for the weekend, especially if the best PR teams in the world have told you this is life's ultimate prize? "Why is the crew that brought us this far cheering?" Weizenbaum wrote. "Why do the passengers not look up from their games?"

Creating technology that mimics humans requires that we get very clear on who we are. "From now on, the safe use of artificial intelligence requires demystifying the human condition," Joanna Bryson, a professor of ethics and technology at the Hertie School of Governance in Berlin, has written. We don't believe we're more giraffelike if we get taller. Why get fuzzy about intelligence?

Others, like Dennett, the philosopher of mind, are even more blunt. We can't live in a world with what he calls "counterfeit people." "Counterfeit money has been seen as vandalism against society ever since money has existed," he said. "Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious."

Artificial people will always have less at stake than real ones, and that makes them amoral actors, he added. "Not for metaphysical reasons but for simple, physical reasons: They are sort of immortal."

We need strict liability for the technology's creators, Dennett argued: "They should be held accountable. They should be sued. They should be put on record that if something they make is used to make counterfeit people, they will be held responsible. They're on the verge, if they haven't already done it, of creating very serious weapons of destruction against the stability and security of society. They should take that as seriously as the molecular biologists have taken the prospect of biological warfare or the atomic physicists have taken nuclear war." This is the real code red. We need to "institute new attitudes, new laws, and spread them rapidly and remove the valorization of fooling people, the anthropomorphization," he said. "We want smart machines, not artificial colleagues."

Bender has made a rule for herself: “I'm not going to converse with people who won't posit my humanity as an axiom in the conversation.” No blurring the line.

I hadn't thought I would need to make such a rule as well. Then I sat down for tea with Blake Lemoine, a third Google AI researcher who got fired — this one last summer, after claiming that LaMDA, Google's LLM, was sentient.

A few minutes into our conversation, he reminded me that not long ago I would not have been considered a full person. “As recently as 50 years ago, you couldn't have opened a bank account without your husband signing,” he said. Then he proposed a thought experiment: “Let's say you have a life-size RealDoll in the shape of Carrie Fisher.” To clarify, a RealDoll is a sex doll. “It's technologically trivial to insert a chatbot. Just put this inside of that.”

Lemoine paused and, like a good guy, said, “Sorry if this is getting triggering.”

I said it was okay.

He said, “What happens when the doll says no? Is that rape?”

I said, “What happens when the doll says no, and it's not rape, and you get used to that?”

“Now you're getting one of the most important points,” Lemoine said. “Whether these things actually are people or not — I happen to think they are; I don't think I can convince the people who don't think they are — the whole point is you can't tell the difference. So we are going to be habituating people to treat things that seem like people as if they're not.”

You can't tell the difference.

This is Bender's point: “We haven't learned to stop imagining the mind behind it.”

Also gathering on the fringe: a robots-rights movement led by a communication-technology professor named David Gunkel. In 2017, Gunkel became notorious for posting a picture of himself in Wayfarer sunglasses, looking not unlike a cop and holding a sign that read ROBOTS RIGHTS NOW. In 2018, he published Robot Rights with MIT Press.

Why not treat AI like property and make OpenAI or Google or whoever profits from the tool responsible for its impact on society? “So yeah, this gets into some really interesting territory that we call 'slavery,'” Gunkel told me. “Slaves during Roman times were partially legal entities and partially property.” Specifically, slaves were property unless they were engaged in commercial interactions, in which case they were legal persons and their enslavers were not responsible. “Right now,” he added, “there's a number of legal scholars suggesting that the way we solve the problem for algorithms is that we just adopt Roman slave law and apply it to robots and AI.”

A reasonable person could say, “Life is full of crackpots. Move on, nothing to worry about here.” Then I found myself, one Saturday night, eating trout niçoise at the house of a friend who is a tech-industry veteran. I sat across from my daughter and next to his pregnant wife. I told him about the bald man at the conference, the one who challenged Bender on the need to give all humans equal moral consideration. He said, “I was just discussing this at a party last week in Cole Valley!” Before dinner, he'd been proudly walking a naked toddler to the bath, thrilled by the kid's rolls of belly fat and hiccup-y laugh. Now he was saying if you build a machine with as many receptors as a human brain, you'll probably get a human — or close enough, right? Why would that entity be less special?

It's hard being a human. You lose people you love. You suffer and yearn. Your body breaks down. You want things — you want people — you can't control.

Bender knows she's no match for a trillion-dollar game changer slouching to life. But she's out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it's not about humility. It's not about all of us. It's not about becoming a humble creation among the world's others. It's about some of us — let's be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

“There's a narcissism that reemerges in the AI dream that we are going to prove that everything we thought was distinctively human can actually be accomplished by machines and accomplished better,” Judith Butler, founding director of the critical-theory program at UC Berkeley, told me, helping parse the ideas at play. “Or that human potential — that's the fascist idea — human potential is more fully actualized with AI than without it.” The AI dream is “governed by the perfectibility thesis, and that's where we see a fascist form of the human.” There's a technological takeover, a fleeing from the body. “Some people say, 'Yes! Isn't that great!' Or 'Isn't that interesting?!' 'Let's get over our romantic ideas, our anthropocentric idealism,' you know, da-da-da, debunking,” Butler added. “But the question of what's living in my speech, what's living in my emotion, in my love, in my language, gets eclipsed.”

The day after Bender gave me the linguistics primer, I sat in on the weekly meeting she holds with her students. They're all working on computational-linguistics degrees, and they all see exactly what's happening. So much possibility, so much power. What are we going to use it for? “The point is to create a tool that is easy to interface with because you get to use natural language. As opposed to trying to make it seem like a person,” said Elizabeth Conrad, who, two years into an NLP degree, has mastered Bender's anti-bullshit style. “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”

Blurring the line is dangerous. A society with counterfeit people we can't differentiate from real ones will soon be no society at all. If you want to buy a Carrie Fisher sex doll and install an LLM, “put this inside of that,” and work out your rape fantasy — okay, I guess. But we can't have both that and our leaders saying, “i am a stochastic parrot, and so r u.” We can't have people eager to separate “human, the biological category, from a person or a unit worthy of moral respect.” Because then we have a world in which grown men, sipping tea, posit thought experiments about raping talking sex dolls, thinking that maybe you are one too.
