Kaitlyn Barnes, Staff Writer
Language is at the center of everything human. Language fuels thought, and thought fuels culture. The same can be said about AI: at its core, AI is a “large language model.” However, advertisers lead us to believe that AI is “human-like” and “intelligent,” beliefs that may rest solely in the hands of marketers who promote emotional consumption habits. Authors Emily M. Bender and Anastasia Berg try to make these arguments abundantly clear to us.
In the guest essay “Why Even Basic A.I. Use Is So Bad for Students,” published in The New York Times, Anastasia Berg attempts to capture how detrimental AI use is to students’ education. What she actually demonstrates, however, is how language shapes not only students’ linguistic abilities but also how the masses interpret AI. This idea is introduced more formally in two articles by Emily M. Bender, “We Need to Talk About How We Talk About AI” and “We Do Not Have to Accept AI (much less GenAI) as Inevitable in Education.” In both, Bender elaborates concisely on Berg’s concepts. Neither author believes AI is “inevitable”; however, both are worried about the implications of “misleading language.”
“Cognitive fluency,” “linguistic capacities” and “functional literacy” are the worries of philosopher Anastasia Berg, and she repeats these buzzwords throughout her article. Weighty psychological terms are often used in writing about AI to evoke emotion; the goal is to convince us that AI will somehow turn our brains to mush. The reader does not know exactly what “cognitive fluency” is, but when Berg writes, “At stake are not just specialized academic skills or refined habits of mind but also the most basic form of cognitive fluency,” it sounds scary. Yet she never defines “cognitive fluency.” The reader is left to guess what it means and somehow connect it to her claim: that AI degrades education.
Whether or not Berg’s argument is accurate, her language is compelling, and it pushes her essay in a direction where language, not AI, is the root of the degradation. She states, “using language is not a skill like any other… Philosophers have disputed whether beings could exist that could think despite lacking language, but it is clear that humans cannot do so.” This idea more fully encompasses her thoughts on AI. Students develop a vast lexicon through education, yet many are still tricked by AI marketing campaigns. Berg’s essay is a great place to start a conversation about AI; she raises many thoughtful points, but she never elaborates on or defines how language propels AI.
Where we leave off with Berg, we pick up with Bender, who argues in “We Need to Talk About How We Talk About AI” that “anthropomorphizing language influences how people perceive a system.” This is the second article in a series in which Bender discusses AI differently than many authors do right now. Much recent writing about AI sits at strange extremes: either AI is going to destroy the world, or it is the best thing since sliced bread. Bender picks neither side, which sets her article apart from Berg’s. Berg thinks she is arguing that AI is detrimental to education, whereas Bender is expressing that in order to use AI, one must first understand it.
What connects these essays is not their intended arguments but the language they use. Where Berg lacks explanation, Bender provides it, specifically when defining AI. As previously stated, AI is a large language model; as Bender puts it, “What large language models are designed to do is mimic the way that people use language.” Language is so important to human life that without it we would be nowhere. AI is a coded program that acts like it knows what humans sound like, but that does not give it the human-like ability to deeply understand language. As Bender warns, “Framing systems as humans or human-like is misleading at best, deadly at worst.”
AI conversations are difficult to have. Many people are unwilling to hear criticism of AI because it makes their lives easier. But maybe the conversations we are having are not the right ones. Discussions of AI need to focus more on how technology and language market AI to the masses. Extreme language like that found in Berg’s article is not helpful because it prompts emotion instead of facts. That is why Bender’s articles are so revolutionary for this era of automation: “It is critical that educators and leaders of education systems bring a critical eye and skeptical attitude towards the sales pitches from AI companies.” Conversation breeds new ideas and can create change. It is important to have tough conversations in this age of uncertainty, and the only way we can do that is through language.

