17 Quotes from Articles Comparing Generative AI to the Use of Calculators in Classrooms

Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion? – The Atlantic

There's a natural progression. New tools like the calculator, like Grammarly and editing tools that came out a number of years ago that made all of our writing better, including mine, right? Those are things that are just going to keep on coming. And, we can't stop them from coming, but it's up to us to decide how to integrate them appropriately. – ABC News

As math professors once had to adjust their math teaching in the presence of calculators, writing instructors may need to adjust their teaching in the presence of AI tools. “It would be like micromanaging the use of calculators in a math class,” Underwood said. “If you’re doing that, it’s a sign that you’re not taking the opportunity to teach them more advanced math that would actually help them.” – Inside Higher Ed

The question before us is how we can productively use ChatGPT to help our students become knowledge transformers. A writer, a teacher, and an education professor all suggest an analogy from the calculator and math to ChatGPT and writing. In the same way that calculators became an important tool for students in math classes, ChatGPT has the potential to become an important tool for writers who want to hone their critical thinking skills along with their communication skills. – Brookings

Much as Google devalued the steel-trap memory, electronic calculators sped up complex calculations, Wikipedia displaced the printed encyclopedia and online databases diminished the importance of a vast physical library, so, too, platforms like ChatGPT will profoundly alter the most prized skills. According to Chamorro-Premuzic, the skill that will be most in demand will be the ability to know what questions to ask. – Inside Higher Ed

It reminds him of what his mother, a high-school math teacher, went through when graphing calculators were introduced. The initial reaction was to ban them; the right answer, he says, was to embrace and use them to enhance learning. “It was a multiyear process with a lot of trying and testing and evaluating and assessing.” Similarly, he anticipates a variety of approaches on his campus. – Chronicle of Higher Ed

Gibson, who has been teaching for 25 years, likened it to more familiar tech tools that enhance, not replace, learning and critical thinking. “I don’t know how to do it well yet, but I want AI chatbots to become like calculators for writing,” she says. Gibson’s view of ChatGPT as a teaching tool, not the perfect cheat, brings up a crucial point: ChatGPT is not intelligent in the way people are, despite its ability to spew humanlike text. It is a statistical machine that can sometimes regurgitate or create falsehoods and often needs guidance and further edits to get things right. – Wired

In the past, near-term prohibitions on slide rules, calculators, word processors, spellcheck, grammar check, internet search engines and digital texts have fared poorly. They focus on in-course tactics rather than on the shifting contexts of what students need to know and how they need to learn it. Reframing questions about AI writers will drive assignment designs and assessments that can minimize academic integrity concerns while promoting learning outcomes. – Inside Higher Ed

Judging from the reaction on TikTok, teachers on the app see ChatGPT as a tool to be treated the same way calculators and cell phones are used in class — as resources to help students succeed but not do the work for them. – Mashable

Professors wondered whether students would lean on the technology as a crutch. “Just as some feared that pocket calculators would cause schoolchildren to forget their multiplication tables, some professors worry that students will learn how to use graphical calculators without learning the concepts of mathematics,” The Chronicle reported in 1992. “[Students] know the information is a quick Google search away,” one professor wrote in a 2015 op-ed for The Chronicle encouraging professors to ban the use of calculators found on laptops and phones during exams. “What’s the point of memorizing it, they want to know.” Despite those fears, the use of calculators in math classrooms and the drum of keyboards in lecture halls are now commonplace. “The calculator changes the kinds of questions that you can ask students,” one professor told The Chronicle in 1992. “A lot of problems we used to assign were very artificial, so the numbers would come out nicely. Today we don’t need to worry about that so much. The problems aren’t harder, but they’re not as neat.” – Chronicle of Higher Ed

Every time a new technology is introduced, we find ourselves struggling with how it forces people to rethink the things they do. The best comparison, he said, is calculators, which, like ChatGPT, many found threatening to education. The worry, he explained, was about the possibility of calculators and statistical software eventually replacing mathematicians. – Grid 

The New York City Department of Education has banned ChatGPT in its schools, as has Sciences Po, a university in Paris, citing concerns it may foster rampant plagiarism and undermine learning. Other professors openly encourage use of chatbots, comparing them to educational tools like a calculator, and argue teachers should adapt curriculums to the software. “Do you want to go to war with your students over AI tools?” said Ian Linkletter, who serves as emerging technology and open-education librarian at the British Columbia Institute of Technology. “Or do you want to give them clear guidance on what is and isn’t okay, and teach them how to use the tools in an ethical manner?” “There are lots of years when the pocket calculator was used for all math ever, and you walked into a classroom and you weren’t allowed to use it,” he said. “It took probably a generational switch for us to realize that’s unrealistic.” Educators must grapple with the concept of “what does it mean to test knowledge.” In this new age, he said, it will be hard to get students to stop using AI to write first drafts of essays, and professors must tailor curriculums in favor of other assignments, such as projects or interactive work. “Pedagogy is going to be different,” he said. “And fighting [AI], I think it’s a losing battle.” – Washington Post

In an academic context, we should approach language models as engines for provisional reasoning — “calculators for words,” as British programmer Simon Willison calls them. Instead of assuming that the model already has an answer to every question in memory, this approach provides, in the prompt, any special assumptions or background knowledge the model will be expected to use. – Chronicle of Higher Ed
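Willison’s “calculators for words” framing can be made concrete with a small sketch. The helper below is hypothetical (it does not come from any of the quoted articles): it simply assembles a prompt that packs the background knowledge and assumptions in alongside the question, rather than trusting whatever the model memorized in training.

```python
def build_prompt(background: str, question: str) -> str:
    """Assemble a prompt that supplies the model's working assumptions
    explicitly, instead of relying on the model's memory."""
    return (
        "Answer using only the background below. "
        "If the background is insufficient, say so.\n\n"
        f"Background:\n{background}\n\n"
        f"Question: {question}"
    )

# Example: ask a policy question, with the relevant policy supplied in-prompt.
prompt = build_prompt(
    background="Our course policy permits AI tools when their use is disclosed.",
    question="May a student use ChatGPT on the take-home essay?",
)
print(prompt)
```

The design point is the one the excerpt makes: the model is treated as an engine for provisional reasoning over the supplied material, not as an oracle with an answer already in memory.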

Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for arithmetic speed. Once online search became widely available, colleges taught students how to properly cite online sources. Some have explored banning AI in education. That would be hard to enforce; it’s also unhealthy, as students will need to function in an AI-infused workplace upon graduation. – Chronicle of Higher Ed 

Every generation of students comes of age with new technology. From the calculator and the personal laptop to smartphones to Zoom, each has been initially met with angst about the disruption to traditional teaching. We fear foundational knowledge will be replaced by robotic inputs and outputs, or that personal interactions unmediated by screens will be eliminated. And so the new technology can seem an obstacle to the parts of the educational experience we love the most — the look when a student first grasps a difficult concept, the spark from an original idea during a brainstorming session, the give-and-take of a classroom debate. – Chronicle of Higher Ed

Practically speaking, I’m treating GPT like a calculator: Most of us used calculators in math class and still didn’t get perfect grades. After discovering my first ChatGPT essay, I decided that going forward, students can use generative A.I. on assignments, so long as they disclose how and why. I’m hoping this will lead to less banging my head against the kitchen table – and, at its best, be its own kind of lesson. – Slate

As academe adjusts to a world with ChatGPT, faculty will need to find fresh ways to assess students’ writing. The same was true when calculators first began to appear in math classrooms, and professors adapted their exams. “Academic integrity is about being honest about the way you did your work.” Spell checkers, David Rettinger, president emeritus at the International Center for Academic Integrity, pointed out, are a prime example of artificial intelligence that may have been controversial at first but are now used routinely, without a second thought, to produce papers. – Chronicle of Higher Ed

Just as calculators and the internet once upended teaching and learning, generative AI represents “a new, major disruption,” says Mike Prizament, senior product marketing manager at Adobe. “It’s also an opportunity to tackle the main challenges in higher education.” – EdTech

Wonder & Fear

The emergent properties of the latest large language models — their ability to stitch together what seems to pass for a primitive form of knowledge of the workings of our world — are not well understood. In the absence of understanding, the collective reaction to early encounters with this novel technology has been marked by an uneasy blend of wonder and fear.

It is not at all clear — not even to the scientists and programmers who build them — how or why the generative language and image models work. And the most advanced versions of the models have now started to demonstrate what one group of researchers has called “sparks of artificial general intelligence,” or forms of reasoning that appear to approximate the way that humans think.

Alexander Karp, CEO of Palantir Technologies, a company that creates data analysis software and works with the U.S. Department of Defense, writing in the New York Times

The Potential of AI Using Liquid Neural Networks

Large language models like ChatGPT and Dall-E have billions of parameters, and each improved model increases in size and complexity. Researchers at an MIT lab believe artificial intelligence can make a leap forward by going smaller. Their experiments show liquid neural networks beat other systems when navigating in unknown environments. “Liquid neural networks could generalize to scenarios that they had never seen, without any fine-tuning, and could perform this task seamlessly and reliably.” They also open the proverbial black box of the system’s decision-making process, which could help to root out bias and other undesirable elements in an AI model. The results have immediate implications for robotics, navigation systems, smart mobility, and beyond, toward predicting financial and medical events. Read more here.

The Coming Textpocalypse

From a piece of genre fiction to your doctor’s report, you may not always be able to presume human authorship behind whatever it is you are reading. Writing, but more specifically digital text—as a category of human expression—will become estranged from us. 

Am I worried that ChatGPT could have (written this article) better? No. But I am worried it may not matter. Swept up as training data for the next generation of generative AI, my words here won’t be able to help themselves: They, too, will be fossil fuel for the coming textpocalypse.

Matthew Kirschenbaum writing in The Atlantic

A new version of ChatGPT

UPDATE MARCH 14 PM: Access to OpenAI’s GPT-4 will be available to users who “sign up to the waitlist and for subscribers of the premium paid-for ChatGPT Plus in a limited, text-only capacity.” GPT-4 is superior to the previous version of the program, but it “can still generate biased, false, and hateful text; it can also still be hacked to bypass its guardrails.” The MIT Tech Review has more information here.

POSTED MARCH 14 AM: The next ChatGPT update is coming soon. OpenAI released GPT-3.5 in November. A Microsoft executive recently implied the launch of GPT-4 is just days away. It will apparently be a multimodal tool, able to translate users’ text into images, audio and video. AI multimodal tools are not new. Meta released its "Make-A-Video" option last year, which creates a video based on a short prompt. OpenAI’s CEO has warned that many rumors about GPT-4 on the internet are “ridiculous.”

Technology & the Ability to Focus

There are two schools of thought on attention. The first argues that we haven’t lost our ability to focus; rather, it has been wrested, even “stolen”, from us by technology. In this view we’re little more than lab rats lured by notifications and algorithms, pings and dings in a large-scale social experiment. We may develop strategies for resisting those dopamine dispensers, such as blocking software or switching to a “brick phone”. But the game is rigged against us.

Those in the second camp may scoff at this: they maintain that most of our struggles with focus are more to do with self-control. There is no notification that can distract us unless we are on some level willing to be distracted. Even the notion of a “shorter attention span” may provoke skepticism.

Instead, could it be that you’re just not that motivated? Whichever worldview you subscribe to – that our attention has been hijacked by our devices, or by our lack of self-discipline – they share an element of fatalism: there is either little you can do, or you’re just not doing enough. 

Elle Hunt writing in The Guardian

Deepfakes Flourish

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

Read more about deepfakes in the New York Times

Will AI doom or save us?

Every new development is condemned as likely leading to the ruination of what has come before. It's a desire to protect our comforts and memories—combined with the fear of losing power and control to the unknown. Anything that forces us to shift our identity is met with resistance. From Socrates (who believed the written word was an ineffective means of communicating knowledge) down through the printing press, radio, TV, computers, etc., the temptation is to condemn new technology for the very fact it is new and unfamiliar. The other temptation is the opposite (deeming it better simply because it is new and shiny).

Considering that Riepl’s Law seems to be holding (old technology is not replaced, but fades in importance) and each iteration of technology can be used for good or evil, it seems the best attitude toward technology is to see it as presenting a both/and situation and not an either/or dilemma.

Will AI doom/save us? Probably. 

Stephen Goforth

A basic explanation of the new AI bot called ChatGPT

U.S.-based AI research company OpenAI, the San Francisco company behind the text-to-image creation tool named DALL-E, has created a chatbot that responds to user-submitted queries. The model was trained using reinforcement learning from human feedback. ChatGPT (GPT stands for “generative pre-trained transformer”) shows how far artificial intelligence—particularly AI text generators—has come. Because it remembers what you've written or said, the interaction has a dynamic conversational feel. That makes it different from other chatbots, which are static. It could be the basis for a medical chatbot to answer patient questions about very specific symptoms or serve as a personalized therapy bot.

Give the software a prompt — and it creates articles, even poetry. It writes code, too. And explains the code. Or corrects errors in code. GPT-3 came before it. Both are generative models. That means they are trained to predict the next word in a sentence. It’s like a topnotch autocompletion tool. What separates ChatGPT from GPT-3 is that ChatGPT goes beyond predicting the next word to also follow the user’s instructions. Training with examples of human conversations has made the experience with the bot more familiar to users.
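The “predict the next word” idea can be illustrated with a toy sketch: a bigram frequency model built from a tiny made-up corpus. This only illustrates the prediction task itself; GPT models use large neural networks trained on vast amounts of text, not frequency tables.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus for illustration.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # ties broken by first occurrence in the corpus
print(predict_next("sat"))
```

An autocompletion tool works the same way in spirit: rank the candidate next words by likelihood and offer the top one, just with a vastly richer model of context than a single preceding word.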

ChatGPT is being used to rewrite literary classics, create a Bible song about ducks, write a sonnet about string cheese, explain scientific concepts, explain how to remove a peanut butter sandwich from a VCR in the style of the King James Bible, or write a story about a fictitious Ohio-Indiana war. The New York Times gushes, “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.” Some tech observers predict it could one day replace Google.

But the software has limitations: it has no information about 2022 because it doesn’t “crawl” the web for new information the way Google does; it can spit out “plausible-sounding but incorrect answers”; and while its creators have taken steps to avoid the racist, sexist and offensive outputs that have popped out of other chatbots, there are likely to be some hiccups in that process.

Some warn about its potential abuse—blurring the lines between original writing and plagiarism.

Mike Sharples, a U.K. professor, says such technology “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” 

Ars Technica reporter Benj Edwards writes:

"[I]t’s possible that OpenAI invented history’s most convincing, knowledgeable and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history." 

Decide for yourself whether we’re on the cusp of new creativity or massive fraud. Create a free account using your email here. Or try the Twitter bot if you’d prefer not to sign up.

Articles about ChatGPT: 

New AI chatbot is scary good – Axios

OpenAI’s new chatbot ChatGPT could be a game-changer for businesses – Tech Monitor  

Google is done. Here’s why OpenAI’s ChatGPT Will Be a Game Changer – Luca Petriconi

The College Essay Is Dead – The Atlantic

The Brilliance and Weirdness of ChatGPT – New York Times

ChatGPT Is Dumber Than You Think - The Atlantic

The Lovelace Effect – AI generated texts should lead us to re-value creativity in academic writing - London School of Economics

Hugging Face GPT-2 Output Detector

AI is finally good at stuff, and that’s a problem - Vox

ChatGPT: How Does It Work Internally? - Toward AI

Your Creativity Won’t Save Your Job From AI - The Atlantic

Could your public photos be used in an AI deepfake? - Ars Technica

API access is expected early in 2023, so companies can create products based on the software. Later next year, rumors say OpenAI will introduce an even better AI model named GPT-4.

Taylor Swift’s Metaverse

Is Taylor Swift doing a better job at building a metaverse than Mark Zuckerberg right now?

Well, in the sense that Mark Zuckerberg is almost totally failing, yeah. This may seem like a leap, but a metaverse—a futuristic virtual-reality world—is essentially a shared online experience, which is not all that different from the online fanscape that Swifties inhabit. It sounds like the Swifties might be living in something that is pretty close to a metaverse currently. They’ll go wherever she goes. So it’s not a virtual world, but it’s a virtual community. That’s really what makes the metaverse and metaverse platforms powerful. People building metaverse platforms, most of them think it’s a technology question. But it’s really a community and culture question.

Wagner James Au quoted in The Atlantic

Metaverse Manipulation

The metaverse will usher in a new age of mass customization of influence and manipulation.

A political candidate is giving a speech to millions of people. While each viewer thinks they are seeing the same version of the candidate, in virtual reality they are actually each seeing a slightly different version. For each and every viewer, the candidate’s face has been subtly modified to resemble the viewer. This is done by blending features of each viewer’s face into the candidate’s face. The viewers are unaware of any manipulation of the image. Yet they are strongly influenced by it.

Rand Waltzman writing in the New York Times

8 Password Managers

1Password*
This popular password manager stores your passwords in your Web browser, so you only need to remember one master password to access them all. Makes good use of the cloud to keep everything in sync across your devices. Starts at $36. 14-day free trial option.

Bitwarden*
The free version of this open source password manager is one of the best, but it is bare bones; if you want more features and are willing to pay, you’ll find better options elsewhere. The user experience is not as intuitive as other options. $10 a year adds more features, though storage is limited.

Dashlane
A solid password manager with VPN service. Strong interface. $60 a year.

DataVault
Password manager to protect your data. Can sync through Dropbox. $10 for any operating system.

eWallet
Password manager to protect your data. Nice look, customizable. The Windows version is $20, others $10.

KeePassXC
Open source password manager. Purposely no cloud option. All passwords are stored locally. Free.

Keeper
Password manager. Well-designed interface. Free plan is limited. Paid starts at $34.99 a year.

LastPass
Generates and saves passwords. Strong cross-platform experience. The company has had some past security issues. Free version is limited. Paid is $36 a year.

More Tech Tools