5 Tips for a Healthy Use of AI

The following strategies can help you maintain a healthy balance between your expertise and AI assistance:

  1. Generate rough drafts from notes, rather than from a blank page: It’s fine to generate drafts with AI, but do your thinking first, put together some structured notes, and treat AI-generated content as a first draft that requires critical review and substantial editing. This approach can help mitigate the risk of anchoring bias.

  2. Rotate between AI-assisted and non-assisted writing: To develop and maintain your own writing skills, alternate between writing with AI tools and writing entirely on your own, rather than relying on AI for every chunk of text. This will also help you maintain your own voice.

  3. Customize AI prompts: Learn to craft specific prompts that guide the AI toward more relevant and useful output for your particular needs (see the example below this list).

  4. Ethical considerations: Be transparent about AI use, especially in academic writing, and follow any guidelines or policies set by your institution or publication venues.

  5. Fact-check and verify: Always verify facts, citations and specific claims made by AI. These tools have a tendency to generate “hallucinations,” plausible-sounding but inaccurate chunks of information.

From The Transmitter
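
An example for tip 3 (our illustration, not The Transmitter's): a vague prompt like "make this abstract better" invites generic rewrites, while a specific one such as "act as a copy editor for a neuroscience journal: tighten this 250-word abstract for clarity, keep every citation intact, and flag any claim that lacks a supporting reference" tells the model what role to play, which constraints to respect, and what a good result looks like.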

26 Recent Articles about AI & Writing

From bench to bot: Does AI really make you a more efficient writer? – The Transmitter

Did an AI write up your arrest? Hard to know – Politico

AI Editing: Are We There Yet? – Science Editor

How do I cite generative AI in MLA style? – Modern Language Association

I tested 7 AI content detectors - they’re getting dramatically better at identifying plagiarism – ZDNET

OpenAI says it’s taking a ‘deliberate approach’ to releasing tools that can detect writing from ChatGPT – TechCrunch

AI is complicating plagiarism. How should scientists respond? – Nature

The telltale words that could identify generative AI text – Ars Technica

Research shows that AI-generated slop overuses specific words – Futurism

AI took their jobs. Now they get paid to make it sound human – BBC

AI and the Death of Student Writing – Chronicle of Higher Ed

Software that detects ‘tortured acronyms’ in research papers could help root out misconduct – Science

How AI Reshapes Vocabulary: Unveiling the Most Used Terms Related to the Technology – Every Pixel

How to tell if something is written by ChatGPT – ReadWrite

Coursera Launches AI Plagiarism Detector – Inside Higher Ed

I Tested Three AI Essay-writing Tools, and Here’s What I Found – Lifehacker

New study on AI-assisted creativity reveals an interesting social dilemma – PsyPost

How to cite ChatGPT in APA Style – American Psychological Association

Is ChatGPT a Reliable Ghostwriter? – The Journal of Nuclear Medicine

AI Is Coming for Amateur Novelists. That’s Fine. – The Atlantic

National Novel Writing Month faces backlash over allowing AI: What to know – Washington Post

How Do You Change a Chatbot’s Mind? I discovered a new world of A.I. manipulation. – New York Times

If journalism is going up in smoke, I might as well get high off the fumes: confessions of a chatbot helper – The Guardian

College Writing Centers Worry AI Could Replace Them – EdSurge

No laughing matter - how AI is helping comedians write jokes – BBC

What Teachers Told Me About A.I. in School – New York Times

Clear Writing vs. Legalese

MIT cognitive scientists set out to determine why laws are written in an incomprehensible style. Lawyers don’t like it, and the average person doesn’t like it, so why does it persist? The researchers first theorized that legal writers start with a main idea but then keep finding reasons to qualify the rules, until the writing becomes overly complicated. It turns out that wasn’t it at all: when they had ordinary people try to write laws, those people immediately adopted a convoluted legal style. The explanation the researchers favor is the "magic spell hypothesis." As they put it, “Just as magic spells are written with a distinctive style that sets them apart from everyday language, using legal language appears to signal a special kind of authority.” Academic writing is similar: when students are asked to write something for a class, they immediately adopt the overly formal writing style of academics.

More: Study explains why laws are written in an incomprehensible style

A Personal Notebook

Successful people track their progress, set goals, reflect, and learn from their mistakes. And they often use some kind of notebook to accomplish this. If you want to get somewhere in life, you need a map, and this notebook is that map. You can write down what you did today, what you tried to accomplish, where you made mistakes, and so forth. It’s a place to reflect. It’s a place to capture important thoughts. It’s a place to be able to track where you’ve been and where you intend to go. It’s one of the most underused, yet incredibly effective tools available to the masses.

Angel Chernoff

14 quotes worth reading about students using AI

Bots like ChatGPT show great promise as a “writing consultant” for students. “It’s not often that students have a chance to sit down with a professor and have long discussions about how to go about this paper, that paper, how to approach research on this topic and that topic. But ChatGPT can do that for them, provided…they know how to use the right ethics, to use it as a tool and not a replacement for their work.” CalMatters 

Don’t rely on AI to know things instead of knowing them yourself. AI can lend a helping hand, but it’s an artificial intelligence that isn’t the same as yours. One scientist described to me how younger colleagues often “cobble together a solution” to a problem by using AI. But if the solution doesn’t work, “they don’t have anywhere to turn because they don’t understand the crux of the problem” that they’re trying to solve. Chronicle of Higher Ed

Janine Holc thinks that students are much too reliant on generative AI, defaulting to it, she wrote, “for even the smallest writing, such as a one sentence response uploaded to a shared document.” As a result, wrote Holc, a professor of political science at Loyola University Maryland, “they have lost confidence in their own writing process. I think the issue of confidence in one’s own voice is something to be addressed as we grapple with this topic.” Chronicle of Higher Ed

It’s a conversation that can be evoked at will. But it’s not different in the content. You still have to evaluate what someone says and whether or not it’s sensible. CalMatters 

Helena Kashleva, an adjunct instructor at Florida SouthWestern State College, spots a sea-change in STEM education, noting that many assignments in introductory courses serve mainly to check students’ understanding. “With the advent of AI, grading such assignments becomes pointless.” Chronicle of Higher Ed 

Given how widely faculty members vary on what kinds of AI are OK for students to use, though, that may be an impossible goal. And of course, even if they find common ground, the technology is evolving so quickly that policies may soon become obsolete. Students are also getting more savvy in their use of these tools. It’s going to be hard for their instructors to keep up. Chronicle of Higher Ed 

In situations when you or your group feel stuck, generative AI can definitely help. The trick is to learn how to prompt it in a way that can help you get unstuck. Sometimes you’ll need to try a few prompts until you get something you like. UXdesign.cc

Proponents contend that classroom chatbots could democratize the idea of tutoring by automatically customizing responses to students, allowing them to work on lessons at their own pace. Critics warn that the bots, which are trained on vast databases of texts, can fabricate plausible-sounding misinformation — making them a risky bet for schools. New York Times

Parents are eager to have their children use the generative AI technology in the classroom. Sixty-four percent said they think teachers and schools should allow students to use ChatGPT to do schoolwork, with 28 percent saying that schools should encourage the technology’s use. Ed Week

Student newspaper editors at Middlebury College have called for a reconsideration of the school’s honor code after a survey found two-thirds of students admitted to breaking it—nearly twice as many as before the pandemic. Wall Street Journal 

If you are accused of cheating with AI Google Docs or Microsoft Word could help. Both offer a version history function that can keep track of changes to the file, so you can demonstrate how long you worked on it and that whole chunks didn’t magically appear. Some students simply screen record themselves writing. Washington Post 

There is no bright line between “my intelligence” and “other intelligence,” artificial or otherwise. It’s an academic truism that no idea exists in an intellectual vacuum. We use other people’s ideas whenever we quote or paraphrase. The important thing is how. Chronicle of Higher Ed

Quizlet has announced four new AI features to help students learn and manage their classwork: Magic Notes, Memory Score, Quick Summary, and AI-Enhanced Expert Solutions. ZDNET

James Neave, Adzuna’s head of data science, recommends interested job applicants build up their AI skills and stand out from the competition in three key ways: Stay on top of developments, use AI in your own work, and show how you’ve used AI successfully to achieve a specific goal. CNBC

8 good quotes about students cheating with AI   

Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate? Chronicle of Higher Ed 

What about students cheating by using ChatGPT instead of doing their own writing? The thing about technology is that it is interfering with the very weak proxies we have of measuring student learning, namely homework and tests. (Generative AI) is just another reminder that it’s actually really hard to know how much someone has learned something, and especially if we’re not talking to them directly but relying on some scaled up automated or nearly automated system to measure it for us. MathBabe Cathy O’Neil

Sometimes, though, professors who felt they had pretty strong evidence of AI usage were met with excuses, avoidance, or denial. Bridget Robinson-Riegler, a psychology professor at Augsburg University, in Minnesota, caught some obvious cheating (one student forgot to take out a reference ChatGPT had made to itself) and gave those students zeros. But she also found herself having to give passing grades to others even though she was pretty sure their work had been generated by AI (the writings were almost identical to each other). Chronicle of Higher Ed 

As professors of educational psychology and educational technology, we’ve found that the main reason students cheat is their academic motivation. The decision to cheat or not, therefore, often relates to how academic assignments and tests are constructed and assessed, not on the availability of technological shortcuts. When they have the opportunity to rewrite an essay or retake a test if they don’t do well initially, students are less likely to cheat. The Conversation

Lorie Paldino, an assistant professor of English and digital communications at the University of Saint Mary, in Leavenworth, Kan., described how she asked one student, who had submitted an argument-based research essay, to bring to her the printed and annotated articles they used for research, along with the bibliography, outline, and other supporting work. Paldino then explained to the student why the essay fell short: It was formulaic, inaccurate, and lacked necessary detail. The professor concluded with showing the student the Turnitin results and the student admitted to using AI. Chronicle of Higher Ed 

Our research demonstrates that students are more likely to cheat when assignments are designed in ways that encourage them to outperform their classmates. In contrast, students are less likely to cheat when teachers assign academic tasks that prompt them to work collaboratively and to focus on mastering content instead of getting a good grade. The Conversation

A common finding (from our survey): Professors realized they needed to get on top of the issue more quickly. It wasn’t enough to wait until problems arose, some wrote, or to simply add an AI policy to their syllabus. They had to talk through scenarios with their students. Chronicle of Higher Ed 

Matthew Swagler, an assistant professor of history at Connecticut College, had instituted a policy that students could use a large language model for assistance, but only if they cited its usage. But that wasn’t sufficient to prevent misuse, he realized, nor prevent confusion among students about what was acceptable. He initiated a class discussion, which was beneficial: “It became clear that the line between which AI is acceptable and which is not is very blurry, because AI is being integrated into so many apps and programs we use.”  Chronicle of Higher Ed

30 Great Quotes about AI & Education

ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content. The tool seems more beneficial for those who already have a lot of experience writing–not those learning how to develop ideas, organize thinking, support propositions with evidence, conduct independent research, and so on. Critical AI

The question isn’t “How will we get around this?” but rather “Is this still worth doing?” The Atlantic

The reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Chronicle of Higher Ed

If you’re a college student preparing for life in an A.I. world, you need to ask yourself: Which classes will give me the skills that machines will not replicate, making me more distinctly human? New York Times 

The student who is using it because they lack the expertise is exactly the student who is not ready to assess what it’s doing critically. Chronicle of Higher Ed 

It used to be about mastery of content. Now, students need to understand content, but it’s much more about mastery of the interpretation and utilization of the content. Inside Higher Ed

Don’t fixate on how much evidence you have but on how much evidence will persuade your intended audience. ChatGPT distills everything on the internet through its filter and dumps it on the reader; your flawed and beautiful mind, by contrast, makes its mark on your subject by choosing the right evidence, not all the evidence. Chronicle of Higher Ed 

The more effective, and increasingly popular, strategy is to tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Then rewrite them yourself to make them flow better. Chronicle of Higher Ed

A.I. will force us humans to double down on those talents and skills that only humans possess. The most important thing about A.I. may be that it shows us what it can’t do, and so reveals who we are and what we have to offer. New York Times

Even if detection software gets better at detecting AI generated text, it still causes mental and emotional strain when a student is wrongly accused. “False positives carry real harm,” he said. “At the scale of a course, or at the scale of the university, even a one or 2% rate of false positives will negatively impact dozens or hundreds of innocent students.” Washington Post

Ideas are more important than how they are written. So, I use ChatGPT to help me organize my ideas better and make them sound more professional. The Tech Insider

A.I. is good at predicting what word should come next, so you want to be really good at being unpredictable, departing from the conventional. New York Times 

We surpass the AI by standing on its shoulders. You need to ask, ‘How is it possibly incomplete?’ Inside Higher Ed

Our students are not John Henry, and AI is not a steam-powered drilling machine that will replace them. We don’t need to exhaust ourselves trying to surpass technology. Inside Higher Ed

These tools can function like personal assistants: Ask ChatGPT to create a study schedule, simplify a complex idea, or suggest topics for a research paper, and it can do that. That could be a boon for students who have trouble managing their time, processing information, or ordering their thoughts. Chronicle of Higher Ed

If the data set of writing on which the writing tool is trained reflects societal prejudices, then the essays it produces will likely reproduce those views. Similarly, if the training sets underrepresent the views of marginalized populations, then the essays they produce may omit those views as well. Inside Higher Ed

Students may be more likely to complete an assignment without automated assistance if they’ve gotten started through in-class writing. Critical AI

Rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Chronicle of Higher Ed

“I think we should just get used to the fact that we won’t be able to reliably tell if a document is either written by AI — or partially written by AI, or edited by AI — or by humans,” computer science professor Soheil Feizi said. Washington Post

(A professor) plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses. New York Times

ChatGPT can play the role of a debate opponent and generate counterarguments to a student’s positions. By exposing students to an endless supply of opposing viewpoints, chatbots could help them look for weak points in their own thinking. MIT Tech Review

Assign reflection to help students understand their own thought processes and motivations for using these tools, as well as the impact AI has on their learning and writing. Inside Higher Ed 

Discuss students’ potentially diverse motivations for using ChatGPT or other generative AI software. Do they arise from stress about the writing and research process? Time management on big projects? Competition with other students? Experimentation and curiosity about using AI? Grade and/or other pressures and/or burnout? Invite your students to have an honest discussion about these and related questions. Cultivate an environment in your course in which students will feel comfortable approaching you if they need more direct support from you, their peers, or a campus resource to successfully complete an assignment. Barnard College 

We will need to teach students to contest it. Students in every major will need to know how to challenge or defend the appropriateness of a given model for a given question. To teach them how to do that, we don’t need to hastily construct a new field called “critical AI studies.” The intellectual resources students need are already present in the history and philosophy of science courses, along with the disciplines of statistics and machine learning themselves, which are deeply self-conscious about their own epistemic procedures. Chronicle of Higher Ed

We should be telling our undergraduates that good writing isn’t just about subject-verb agreement or avoiding grammatical errors—not even good academic writing. Good writing reminds us of our humanity, the humanity of others and all the ugly, beautiful ways in which we exist in the world. Inside Higher Ed 

Rather than trying to stop the tools and, for instance, telling students not to use them, in my class I’m telling students to embrace them – but I expect their quality of work to be that much better now they have the help of these tools. Ultimately, by the end of the semester, I'm expecting the students to turn in assignments that are substantially more creative and interesting than the ones last year’s students or previous generations of students could have created. World Economic Forum

Training ourselves and our students to work with AI doesn’t require inviting AI to every conversation we have. In fact, I believe it’s essential that we don’t.  Inside Higher Ed

If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself.  The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.” Chronicle of Higher Ed

How might chatting with AI systems affect vulnerable students, including those with depression, anxiety, and other mental-health challenges? Chronicle of Higher Ed 

Are we going to fill the time saved by AI with other low-value tasks, or will it free us to be more disruptive in our thinking and doing? I have some unrealistically high hopes of what AI can deliver. I want low-engagement tasks to take up less of my working day, allowing me to do more of what I need to do to thrive (thinking, writing, discussing science with colleagues). Nature

17 articles about AI & Academic Scholarship

Scientific authorship in the time of ChatGPT – Chemistry

AI could rescue scientific papers from the curse of jargon – Freethink

Science journals ban listing of ChatGPT as co-author on papers – The Guardian

ChatGPT listed as author on research papers: many scientists disapprove – Nature (subscription required)

Abstracts written by ChatGPT fool scientists – Nature (subscription required)

The World Association of Medical Editors has created guidelines for the use of ChatGPT and other chatbots – Medscape (subscription required)

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation

It’s Not Just Our Students — ChatGPT Is Coming for Faculty Writing – Chronicle of Higher Ed

As scientists explore AI-written text, journals hammer out policies – Science

AI writing tools could hand scientists the ‘gift of time’ – Nature

ChatGPT Is Everywhere: Love it or hate it, academics can’t ignore the already pervasive technology – Chronicle of Higher Ed

Academic Publishers Are Missing the Point on ChatGPT – Scholarly Kitchen

AI Is Impacting Education, but the Best Is Yet to Come – Inside Higher Ed

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot – The Guardian

How to Cite ChatGPT – APA Style

Researchers claim to have developed tool capable of detecting scientific text generated by ChatGPT with 99% accuracy – University of Kansas

ChatGPT: five priorities for research – Nature

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                               

27 thoughts on teaching with AI            

22 quotes about cheating with AI & plagiarism detection        

14 quotes worth reading about AI use in academic papers                       

13 Quotes worth reading about AI’s impact on College Administrators & Faculty

17 articles about AI & Academic Scholarship            

14 quotes worth reading about AI use in academic papers

Science, Elsevier, and Nature were quick to react, updating their respective editorial and publishing policies and stating unconditionally that ChatGPT can’t be listed as an author on an academic paper. It is very hard to define exactly how GPT is used in a particular study, as some publishers demand, in the same way it is near impossible for authors to detail how they used Google as part of their research. Scholarly Kitchen

An app I have found useful every day is Perplexity. I am most taken with the auto-embedded citations of sources in the response, much like we do in research papers. This is most useful for deeper digging into topics. Inside Higher Ed 

Tools such as Grammarly, Writefull, and even the Microsoft grammar checker are relied upon heavily by authors. If an author is using GPT for language purposes, why would that need to be declared and other tools not? What if authors get their ideas for new research from ChatGPT or have GPT analyze their results but write it up in their own words; might that be OK because the author is technically doing the writing? I believe that self-respecting researchers won’t use GPT as a primary source the same way they don’t use Wikipedia in that manner. However, they can use it in a myriad of other ways, including brainstorming, sentence construction, data crunching, and more. The onus of responsibility for the veracity of information still falls on the researcher, but that doesn’t mean we should rush to ban it because some might use it as a way to cut corners. Scholarly Kitchen

An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”. What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT. The Guardian

An application that holds great potential to those of us in higher ed is ChatPDF! It is what you might imagine, a tool that allows you to load a PDF of up to 120 pages in length. You can then apply the now-familiar ChatGPT analysis approach to the document itself. Ask for a summary. Dig into specifics. This will be a useful tool for reviewing research and efficiently understanding complex rulings and other legal documents. Inside Higher Ed

If you’ve used ChatGPT or other AI tools in your research, (for APA) describe (in your academic paper) how you used the tool in your Method section or in a comparable section of your paper. For literature reviews or other types of essays or response or reaction papers, you might describe how you used the tool in your introduction. In your text, provide the prompt you used and then any portion of the relevant text that was generated in response. You may also put the full text of long responses from ChatGPT in an appendix of your paper or in online supplemental materials, so readers have access to the exact text that was generated. If you create appendices or supplemental materials, remember that each should be called out at least once in the body of your APA Style paper. APA Style 
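
For reference, the sample reference entry in that APA Style post takes roughly this form (check apastyle.apa.org for the current wording): OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat. The corresponding in-text citation is (OpenAI, 2023).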

Outside of the most empirical subjects, the determinants of academic status will be uniquely human — networking and sheer charisma — making it a great time to reread Dale Carnegie’s How to Win Friends and Influence People. Chronicle of Higher Ed 

The US journal Science, announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author. Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do this, it cannot be an author. The Guardian

A chatbot was deemed capable of generating quality academic research ideas. This raises fundamental questions around the meaning of creativity and ownership of creative ideas — questions to which nobody yet has solid answers. Our suspicion here is that ChatGPT is particularly strong at taking a set of external texts and connecting them (the essence of a research idea), or taking easily identifiable sections from one document and adjusting them (an example is the data summary — an easily identifiable “text chunk” in most research studies). A relative weakness of the platform became apparent when the task was more complex - when there are too many stages to the conceptual process. The Conversation 

Already some researchers are using the technology. Among only the small sample of my work colleagues, I’ve learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture. Chronicle of Higher Ed 

For most professors, writing — even bad first drafts or outlines — requires our labor (and sometimes strain) to develop an original thought. If the goal is to write a paper that introduces boundary-breaking new ideas, AI tools might reduce some of the intellectual effort needed to make that happen. Some will see that as a smart use of time, not evidence of intellectual laziness. Chronicle of Higher Ed

The quality of scientific research will erode if academic publishers can't find ways to detect fake AI-generated images in papers. In the best-case scenario, this form of academic fraud will be limited to just paper mill schemes that don't receive much attention anyway. In the worst-case scenario, it will impact even the most reputable journals and scientists with good intentions will waste time and money chasing false ideas they believe to be true. The Register 

Many journals’ new policies require that authors disclose use of text-generating tools and ban listing a large language model such as ChatGPT as a co-author, to underscore the human author’s responsibility for ensuring the text’s accuracy. That is the case for Nature and all Springer Nature journals, the JAMA Network, and groups that advise on best practices in publishing, such as the Committee on Publication Ethics and the World Association of Medical Editors. Science

Just as publishers begin to get a grip on manual image manipulation, another threat is emerging. Some researchers may be tempted to use generative AI models to create brand-new fake data rather than altering existing photos and scans. In fact, there is evidence to suggest that sham scientists may be doing this already. A spokesperson for Uncle Sam's defense research agency confirmed it has spotted fake medical images in published science papers that appear to be generated using AI. The Register

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                               

27 thoughts on teaching with AI            

22 quotes about cheating with AI & plagiarism detection        

13 Quotes worth reading about AI’s impact on College Administrators & Faculty

17 articles about AI & Academic Scholarship                                        

27 quotes about AI & writing assignments

Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic. The Atlantic

Its ability to do so well in that niche might be a reminder to us that we’ve allowed academic writing to become a little bit too tightly bound up in a predictable pattern. Maybe forcing us to stretch the kind of assignments we’re giving students is not a bad thing. Inside Higher Ed

The teaching of writing has too often involved teaching students to follow an algorithm. Your essay will have five paragraphs; start the first one with a sentence about your main idea, then fill in three paragraphs with supporting ideas, then wrap it up with a conclusion. Call it a format or a template or an algorithm. Schools have taught students to assemble essays to satisfy algorithms for judging their writing—algorithms that may be used by either humans or software, with little real difference. If this kind of writing can be done by a machine that doesn’t have a single thought in its head, what does that tell us about what we’ve been asking of students? The unfortunate side effect is that teachers end up grading students not on the quality of their end product, but on how well they followed the teacher-required algorithm. Forbes

AI writing tools bring urgency to a pedagogical question: If a machine can produce prose that accomplishes the learning outcomes of a college writing assignment, what does that say about the assignment? Inside Higher Ed

ChatGPT is a dynamic demonstration that if you approach an essay by thinking “I’ll just write something about Huckleberry Finn,” you get mediocre junk. Better thinking about what you want the essay to be about, what you want it to say, and how you want to say it gets you a better result, even if you’re having an app do the grunt work of stringing words together. Forbes

AI is trained on large data sets; if the data set of writing on which the writing tool is trained reflects societal prejudices, then the essays it produces will likely reproduce those views. Similarly, if the training sets underrepresent the views of marginalized populations, then the essays they produce may omit those views as well. Inside Higher Ed

Artificial intelligence is likely to have some impact on how students write, according to John Gallagher, a professor in the English department at the University of Illinois. When word processors replaced typewriters, written sentences got longer and more complicated, he said. Wall Street Journal

In-class exams — the ChatGPT-induced alternative to writing assignments — are worthless when it comes to learning how to write, because no professor expects to see polished prose in such time-limited contexts. Washington Post 

Students will only gravitate to chat bots if the message they are getting from their writing instructors is that the most important qualities of writing are technical proficiency and correctness. Inside Higher Ed

Hold individual conferences on student writing or ask students to submit audio/video reflections on their writing. As we talk with students about their writing, or listen to them talk about it, we get a better sense of their thinking. By encouraging student engagement and building relationships, these activities could discourage reliance on automated tools. Critical AI 

It’s not easy to write like a human, especially now, when AI or the worn-in grooves of scholarly habits are right there at hand. Resist the temptation to produce robotic prose, though, and you’ll find that you’re reaching new human readers, in the way that only human writers can. Chronicle of Higher Ed

Here’s an idea for extracting something positive from the inevitable prominence that chatbots will achieve in coming years. My students and I can spend some class time critically appraising a chatbot-generated essay, revealing its shortcomings and deconstructing its strengths. Washington Post 

David Chrisinger, who directs the writing program at the Harris School of Public Policy at the University of Chicago, is asking his students to generate a 600-word essay using ChatGPT. Then their assignment is to think of more incisive questions to elicit a stronger response. Finally, they are required to edit the essay for tone and voice and to tailor it to the intended audience. Wall Street Journal

Instead of just presenting conclusions, give the reader a glimpse of your origin story as a researcher, a sense of the stumbling blocks you encountered along the way, and a description of the elation or illumination you felt when you experienced your eureka moment. If you tell stories, tell them well. Chronicle of Higher Ed 

Students may be more likely to complete an assignment without automated assistance if they’ve gotten started through in-class writing. (Note: In-class writing, whether digital or handwritten, may have downsides for students with anxiety and disabilities). Critical AI

In a world where students are taught to write like robots, a robot can write for them. Students who care more about their GPA than muddling through ideas and learning how to think will run to The Bot to produce the cleanest written English. The goal is to work through thoughts and further research and revision to land on something potentially messy but deeply thought out. Inside Higher Ed 

ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content. The tool seems more beneficial for those who already have a lot of experience writing–not those learning how to develop ideas, organize thinking, support propositions with evidence, conduct independent research, and so on. Critical AI 

What many of us notice about art or prose generated by A.I. is that it’s often bland and vague. It’s missing a humanistic core. It’s missing an individual person’s passion, pain, longings and a life of deeply felt personal experiences. It does not spring from a person’s imagination, bursts of insight, anxiety and joy that underlie any profound work of human creativity. New York Times

The most obvious response, and one that I suspect many professors will pursue, involves replacing the standard five-page paper assignment with an in-class exam. Others expect to continue with the papers but have suggested that the assigned topics should be revised to focus on lesser-known works or ideas about which a chatbot might not “know” too much. Washington Post 

Assigning personal writing may still help motivate students to write and, in that way, deter misuse of AI. Chronicle of Higher Ed 

We’re expecting students to use ChatGPT to write a first draft of their paper but then not use it to revise the paper.  I don’t consider myself a pessimist about human nature, but in what world do we humans take a perfectly good tool that helped us get from point A to point B and then decline its offer to take us from point B to point C? Inside Higher Ed 

Writing teacher John Warner wrote, “If AI can replace what students do, why have students keep doing that?” He recommended changing “the way we grade so that the fluent but dull prose that ChatGPT can churn out does not actually pass muster.” Chronicle of Higher Ed

Assign writing that is as interesting and meaningful to students as possible. Connecting prompts to real-world situations and allowing for student choice and creativity within the bounds of the assignment can help. Chronicle of Higher Ed 

No one creates writing assignments because the artifact of one more student essay will be useful in the world; we assign them because the process itself is valuable. Through writing, students can learn how to clarify their thoughts and find a voice. If they understand the benefits of struggling to put words together, they are more likely not to resort to a text generator. Chronicle of Higher Ed

Really soon, we’re not going to be able to tell where the human ends and where the robot begins, at least in terms of writing. Chronicle of Higher Ed

Many teachers have reacted to ChatGPT by imagining how to give writing assignments now—maybe they should be written out by hand, or given only in class—but that seems to me shortsighted. The question isn’t “How will we get around this?” but rather “Is this still worth doing?” The Atlantic

As the co-editors of a book series on teaching in higher education, we receive many queries and proposals from academic writers. A significant percentage of those proposals — which often include sample chapters — are written in prose that reads like it was generated by ChatGPT. The author’s ideas are laid out like bullet points on a whiteboard, the citations are dense and numerous, and the examples and stories (if there are any) are pale and lifeless. The most successful books in our series are the ones that don’t read like that. Their authors have demolished — or at least weakened — the wall that separates their subject matter from their lives. Chronicle of Higher Ed

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                              

22 examples of teaching with AI                                                           

27 thoughts on teaching with AI   

13 thoughts on the problems of teaching with AI                                       

22 quotes about cheating with AI & plagiarism detection

Students should know that this technology is rapidly evolving: future detectors may be able to retroactively identify auto-generated prose from the past. No one should present auto-generated writing as their own on the expectation that this deception is undiscoverable. Inside Higher Ed

Alex Lawrence, professor at Weber State University, described it as “the greatest cheating tool ever invented.” Wall Street Journal

Some plagiarism detection and learning management systems have adapted surveillance techniques, but that leaves systems designed to ensure original work “locked in an arms race” with systems designed to cheat. Inside Higher Ed

Popular essay submission portal Turnitin is developing its own detector, and Hive claims that its service is more accurate than others on the market, including OpenAI’s very own, and some independent testers have agreed. Tech Radar 

While faculty members will likely spend some time trying to identify a boundary line between AI assistance and AI cheating with respect to student writing, that may not be the best use of their time. That path leads to trying to micromanage students’ use of these models. Inside Higher Ed

You can have tools like Quillbot (that can) paraphrase the essays ChatGPT gives you so it doesn't look too obvious. Mashable

“If I’m a very intelligent AI and I want to bypass your detection, I could insert typos into my writing on purpose.” said Diyi Yang, assistant professor of computer science at Stanford University.  Inside Higher Ed 

But what about the cheaters, the students who let a chatbot do their writing for them? I say, who cares? In my normal class of about 28 students, I encounter one every few semesters whom I suspect of plagiarism. Let’s now say that the temptation to use chatbots for nefarious ends increases the number of cheaters to an (unrealistic) 20 percent. It makes no sense to me that I should deprive 22 students who can richly benefit from having to write papers only to prevent the other six from cheating (some of whom might have cheated even without the help of a chatbot). Washington Post 

If a teacher’s concern is that students will “cheat” with ChatGPT, the answer is to give assignments that are personal and focused on thinking. We don’t have to teach students to follow a writing algorithm any more; there’s an app for that. Forbes

What’s to stop a student from getting ChatGPT to write their work, then tweak it slightly until it no longer gets flagged by a classifier? This does take some effort, but a student may still find this preferable to writing an entire assignment themselves. Tech Radar 

If the concern is that students could cheat, it’s worth remembering that they could cheat six months ago and 60 years ago. Students taking a brand-new exam could already get answers to test questions in minutes from services like Chegg. Students could already plagiarize — or pay someone else to write their entire paper. With the entrance of ChatGPT, “what’s changed is the ease and the scope.” Chronicle of Higher Ed

If ChatGPT makes it easy to cheat on an assignment, teachers should throw out the assignment rather than ban the chatbot. MIT Tech Review

Professors can create conditions in which cheating is difficult, giving closed-book, closed-note, closed-internet exams in a controlled environment. They can create assignments in which cheating is difficult, by asking students to draw on what was said in class and to reflect on their own learning. They can make cheating less relevant, by letting students collaborate and use any resource at their disposal. Or they can diminish the forces that make cheating appealing: They can reduce pressure by having more-frequent, lower-stakes assessments. Chronicle of Higher Ed

Unlike accusations of plagiarism, AI cheating has no source document to reference as proof. “This leaves the door open for teacher bias to creep in.” Washington Post

Despite their positive attitude toward AI, many students in a survey say they feel anxious and lack clear guidance on how to use AI in the learning environments they are in. It is simply difficult to know where the boundary for cheating lies. Neuroscience News

While the AI-detection feature could be helpful in the immediate term, it could also lead to a surge in academic-misconduct cases, Eaton said. Colleges will have to figure out what to do with those reports at a moment when professors have yet to find consensus on how ChatGPT should be dealt with in their classrooms. Chronicle of Higher Ed

“Do you want to go to war with your students over AI tools?” said Ian Linkletter, who serves as emerging technology and open-education librarian at the British Columbia Institute of Technology. “Or do you want to give them clear guidance on what is and isn’t okay, and teach them how to use the tools in an ethical manner?” Washington Post

Even if detection software gets better at detecting AI generated text, it still causes mental and emotional strain when a student is wrongly accused. “False positives carry real harm,” he said. “At the scale of a course, or at the scale of the university, even a one or 2% rate of false positives will negatively impact dozens or hundreds of innocent students.” Washington Post 

On many campuses, high-course-load contingent faculty and graduate students bear much of the responsibility for the kinds of large-enrollment, introductory-level, general-education courses where cheating is rampant. How can large or even mid-sized colleges withstand the flood of nonsense quasi-plagiarism when academic-integrity first responders are so overburdened and undercompensated? Chronicle of Higher Ed

Bruce Schneier, a public interest technologist and lecturer at Harvard University’s Kennedy School of Government, said any attempts to crack down on the use of AI chatbots in classrooms are misguided, and history proves that educators must adapt to technology. Washington Post

Harsh punishments for cheating might preserve the status quo, but colleges generally give cheaters a slap on the wrist, and that won’t change. Unmonitored academic work will become optional, or a farce. The only thing that will really matter will be exams. And unless the exams are in-person, they’ll be a farce, too. Chronicle of Higher Ed

“I think we should just get used to the fact that we won’t be able to reliably tell if a document is either written by AI — or partially written by AI, or edited by AI — or by humans,” computer science professor Soheil Feizi said. “We should adapt our education system to not police the use of the AI models, but basically embrace it to help students to use it and learn from it.” Washington Post

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments            

22 examples of teaching with AI                                                           

27 thoughts on teaching with AI   

13 thoughts on the problems of teaching with AI                                               

6 Ethical Questions to Think About if You Use Generative AI

1. An image recently won one of the world’s most prestigious photography competitions.

The artist said it was “co-produced by the means of AI (artificial intelligence) image generators.” He wrote, “Just as photography replaced painting in the reproduction of reality, AI will replace photography. Don’t be afraid of the future. It will just be more obvious that our mind always created the world that makes it suffer.”

Do you agree? What role should AI have in the creation of images, not only in contests but by those producing media for companies, schools, and even churches?

2. If a painting, song, novel or movie that you love was generated by an AI, would you want to know? Would it change your reaction if you knew the creator was a machine?  

3. Would it be ethical for a chatbot to write a PhD thesis, as long as the student reviews and refines the work? What percentage of rewriting would be the minimum to make this acceptable?

4. Is it OK for AI to brainstorm ideas for projects or products that you later claim as your own? Would it change your answer if you came up with the original question? What if you fine-tuned some of the ideas? What if you give the AI some credit for helping you?

5. If you use AI and it plagiarizes an artist or writer, who should be blamed? Would your answer change if you were not aware the AI had committed the plagiarism? How might you prove that you were unaware?

6. How do you draw the ethical line for using a chatbot like ChatGPT? Would it be OK for writing an email to schedule a meeting? A sales pitch to a client? A religious sermon? A conversation in an online dating app? A letter to a friend going through depression?

There are more ethical questions about AI in this Wall Street Journal article.

The Coming Textpocalypse

From a piece of genre fiction to your doctor’s report, you may not always be able to presume human authorship behind whatever it is you are reading. Writing, but more specifically digital text—as a category of human expression—will become estranged from us. 

Am I worried that ChatGPT could have (written this article) better? No. But I am worried it may not matter. Swept up as training data for the next generation of generative AI, my words here won’t be able to help themselves: They, too, will be fossil fuel for the coming textpocalypse.

Matthew Kirschenbaum writing in The Atlantic

8 Free Webinars about AI, journalism, writing, & more

Monday, March 13 - AI & the Future of Journalism

What: We will examine the AI field and its impact on journalism, for good and for ill. What features of AI hold the most peril for journalists? Which hold the most promise? How does an AI program actually work? Does AI threaten journalism jobs? Can journalists investigate AI tools? If so, how? Are there practical hacks for determining whether content is AI-generated or real?

Who: Garance Burke, global investigative reporter, The Associated Press; Daniel Verten, head of creative at Synthesia; Emilia Diaz Struck, International Consortium of Investigative Journalists; Justin Gluska, founder of Gold Penguin and AI technology blogger.

When: 6:30 pm, Eastern

Where: Zoom

Cost: Free but registration is required

Sponsor: The Deadline Club

More info

 

Tue, March 14 -  Elevating Your Nonprofit's Online Presence: Best Website Practices for 2023

What: The trends and strategies that will help your organization stand out in the digital landscape. Whether you're a new or a well-established nonprofit, this webinar has everything you need to take your website to the next level. From mobile-first design to accessibility compliance, we'll cover all the essential elements that go into creating a seamless and satisfying online experience for your users.

Who: Erin Mastrantonio of Elevation, which designs websites for nonprofits.

When: 11 am, Pacific

Where: Zoom

Cost: Free

Sponsor: The Nonprofit Learning Lab

More info

 

Fri, March 17 - Firewalls & Journalism: What to know about Internet shutdown trends

What: Join us for a virtual panel discussion that will delve deep into the worrying spread of Internet kill switches and what it specifically means for a free press.  

Who: Ksenia Ermoshina, a senior researcher at Citizen Lab at the University of Toronto and Center for Internet and Society; Natalia Krapiva, tech-legal counsel for digital rights watchdog group Access Now; Nat Kretchun, senior vice president for programs at the Open Technology Fund; Moderator: Rachel Oswald, National Press Club press freedom team lead and a foreign policy reporter for CQ Roll Call

When: 11:30 am, Eastern

Where: Zoom

Cost: Free

Sponsors: National Press Club & The Journalism Institute

More info

 

Sat, March 18 - Ask an Editor: The Craft & Business of Writing

What: We will answer questions and offer insights into the craft of writing and the struggles we all face in these uncertain times.  

When: 10 am, Pacific

Where: Zoom

Cost: Free

Sponsor: Allegory Editing

More info

 

Tues, March 21 - The state of digital publishing: metrics, insights and revenue strategies

What: A look at the current state of digital publishing and what it will look like in 2023 and beyond, drawing on Pugpig’s State of the Digital Publishing Market report. Topics include: comparisons of reader engagement across users and platforms, case studies of innovation in digital publishing, the use of audio and how it drives engagement, and how news publishers intend to retain readers in 2023.

Who: Jonny Kaldor, founder and CEO, Pugpig, a digital publishing platform for hundreds of news, consumer, specialist and B2B media brands.

When: Noon, Eastern

Where: Zoom

Cost: Free

Sponsor: America’s Newspapers

More info

Tues, March 21 - AI-Generated Art: Boom or Bust for Human Creativity?

What: A discussion on how generative AI works, how artists are using these tools, and whether AI-generated art will be a boom or bust for human creativity.

Who: Ahmed Elgammal, Professor, Rutgers University; Patrick Grady, Policy Analyst, Center for Data Innovation; Marian Mazzone, Associate Professor, College of Charleston; Irene Solaiman, Director of Policy, Hugging Face; Brigitte Vézina, Director of Policy, Creative Commons 

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: The Center for Data Innovation

More info

 

Wed, March 22 - ChatGPT, Journalism, and the Future of Creativity

What: What happens when leading journalists who cover science and eminent scientists who reach mass audiences get together to exchange ideas? What do their differing perspectives tell us about how science communication is changing and how we can do it better?

Who: Joanna Stern writes and makes videos at the Wall Street Journal, where she is the senior personal technology columnist. She won an Emmy in 2021. Jean Oh is an associate research professor at Carnegie Mellon University who builds robots with advanced artificial intelligence capabilities.

When: 6:30 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Journalism Institute at New York University

More info

 

Thu, March 23 - Undaunted: How Women Changed American Journalism

What: A discussion of the inspiring stories of pioneering women journalists. You’ll hear about the challenges they faced and how they paved the way for the next generation.

Who: Brooke Kroeger, Kim Todd, and Knopf editor Jonathan Segal, discussing their book “Undaunted”

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Journalism Online

More info

A dozen January webinars about journalism, content creation, grammar, credibility, LinkedIn & more

Tues, Jan 17 - Prioritizing mental health in the newsroom: How to create a culture of well-being

What: In this session, four experts will draw from their personal and professional experiences to share advice on how newsrooms can make mental well-being a part of their culture.

Who: Moderator: Naseem Miller of The Journalist’s Resource; Scott Blanchard, the director of journalism at public media station WITF in Harrisburg, Pa.; Sewell Chan, editor in chief of The Texas Tribune; Elana Newman, McFarlin Professor of Psychology at The University of Tulsa; Dave Seglins, an investigative journalist and “Well-being Champion” with CBC News.

When: Noon, Eastern

Where: Zoom

Cost: Free

Sponsor: The Journalist’s Resource, a project of Harvard's Shorenstein Center

More info

 

Wed, Jan 18 - Five Biggest Mistakes in Content Creation

What: In this webinar, team members from TechSoup will share some common missteps that can become BIG mistakes when creating content to market your nonprofit to constituents.

When: 10 am, Pacific

Where: Zoom

Cost: Free

Sponsor: TechSoup

More info

 

Wed, Jan 18 - Polishing Your Materials to Land that Scholarship, Internship or Job!

What: Panelists will share best practices on how to make your application materials shine.

Who: Aisha Al-Muslim, Senior Editor, Newsroom Internships, The Wall Street Journal; Maya Nairobi Brown, the first recipient of the NAHJ Ada Lourdes Vigo Afro-Latino Scholarship

When: 6 pm, Central

Where: Zoom

Cost: Free

Sponsor: National Association of Hispanic Journalists

More info

 

Thu, Jan 19 - Grammar Recharge for Journalists

What: Panelists will identify commonly found grammatical mistakes -- and how to correct them -- and will share their advice for writing in an error-free and easy-to-understand manner.

Who: Cecilia Dowd, the vice president of the Press Club of Long Island, will moderate panelists John McIntyre and Pam Robinson, both experienced journalists and copy editors. Guests will have the opportunity to have their burning grammar questions answered.

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Press Club of Long Island

More info

 

Mon, Jan 23 - We regret the error: Public trust and media accountability 

What: The panel will explore how newsroom leaders discuss past mistakes and what they’ve learned to build greater trust with their audiences. A recent Gallup survey found that the public’s confidence in news has fallen to an all-time low. What can news organizations do to regain trust in their work? What lessons have newsroom leaders learned from past mistakes to help their audiences understand the lengths they go to in producing credible and trustworthy news?

Who: Moderated by News Literacy Project President and CEO Charles Salter, the panel features top journalists and media experts, including Tony Cavin, managing editor for standards and practices at NPR; Sara Kehaulani Goo, editor in chief of Axios; Adam Symson, president and CEO of The E.W. Scripps Company; and Krissah Thompson, managing editor of The Washington Post.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The News Literacy Project and The E.W. Scripps Company

More info

 

Tues, Jan 24 – Trust issues: How Chicago news outlets build credibility in their communities

What: We are exploring why trust in the news media is declining among some Americans and presenting a panel of innovative Chicago journalists to discuss how they are building trust in their work. These journalists are involving their audiences in their editorial processes, hosting listening sessions, rethinking subscription models, prioritizing transparency with sources, and training young journalists and community members.

Who: The panelists include: Alejandra Cancino, deputy editor of City Bureau; Jennifer Kho, executive editor of the Chicago Sun-Times; Jen Sabella, co-founder and director of strategy of Block Club Chicago; Michael Spikes, lecturer at Northwestern Medill and director of the Teach for Chicago Journalism Program.

When: 6 pm

Where: Zoom and in-person

Cost: Free

Sponsor: The News Literacy Project and Northwestern Medill

More info

 

Tues, Jan 24 - The News Deserts in Our Backyard

What: This panel will present the early results of a nearly yearlong Hofstra University Presidential Research Project examining news deserts in our own backyard, and will continue a conversation already begun about how to address the crisis in local news coverage, particularly in marginalized communities.

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Press Club of Long Island

More info

 

Wed, Jan 25 - Visual Storytelling: the challenges faced by journalists working with images

What: Hear from an expert panel about the challenges that come with visual journalism -- not only their experiences pursuing it, but also defending those who do.

Who: Galina Arapova, Media Lawyer, Russia, Mass Media Defence Centre; Zach, Cartoonist, The Philippines, Cartooning for Peace; Matthew Gillett, Senior Lecturer in Law & Principal Investigator, Digital Verification Unit, University of Essex.

When: 8 am, GMT

Where: Zoom

Cost: Free

Sponsor: Media Defence

More info

 

Wed, Jan 25 - Rock Your LinkedIn Profile

What: Explore how to create a LinkedIn profile that brings your personal career story to life, whether you’re just starting out, seeking to advance, or making a career change.

Who: Gorki De Los Santos and Francisco Tobon from LinkedIn

When: 3 pm, Central

Where: Zoom

Cost: Free

Sponsor: National Association of Hispanic Journalists

More info

 

Wed, Jan 25 - Power in Art: Elevate Student Voice with Editorial Cartooning

What: Learn how to support students as they analyze and create political cartoons to share their perspective about issues. You’ll also explore topics such as trust and credibility as they relate to opinion journalism.

Who: Hear from renowned political cartoonist Lalo Alcaraz and educators who have done editorial cartooning assignments with students.

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The News Literacy Project and KQED

More info

 

Thu, Jan 26 - Celebrate National News Literacy Week with Teens for Press Freedom

What: A student-led conversation about the overwhelming amount of information (including misinformation) that we encounter -- and how you can know what to trust. You’ll hear practical tips and advice from young people about how they separate fact from fiction, how they determine credibility and how they decide what to share with friends or base opinions on.  

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Microsoft Flip, the News Literacy Project and Teens for Press Freedom

More info

 

Tue, Jan 31 - What’s the Point of APA Style? Relevance of a Writing Style to Life After Graduation

What: How librarians can increase students’ buy-in to learning APA Style by demonstrating how learning and using a writing style will help students in any career they pursue. We will share real-life examples of applications of APA Style, showing how the APA Style core principles of writing with clarity, precision, and inclusion support people in tasks such as writing resumes and emails, creating PowerPoint presentations and other visual displays, and, above all, communicating with others to get important stuff done.

Who: The APA Style team

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: APA Style

More info

23 AI-based text/image/video creation tools

ArtBreeder
Create portraits and landscape images with this AI-based creation tool. Free. An introduction and review here.

ChatGPT
This OpenAI chatbot remembers what you've written or said, so the interaction has a dynamic conversational feel. Give the software a prompt and it creates articles, even poetry. It writes code, too, and explains the code. It doesn't cite legitimate sources, however, and its knowledge is limited to information from before 2022. Free.

Copilot
Released by Microsoft-owned GitHub and built on OpenAI technology, this tool translates plain-language instructions into working computer code. Intended for developers.

Copy.ai
AI text-generation tool.

Craft.do
Document-creation tool with AI features.

DALL-E
OpenAI’s tool that turns written text into realistic images using AI. Named after painter Salvador Dalí and Disney Pixar’s WALL-E. A limited number of images are free.

Google Pinpoint
This tool uses AI to analyze PDFs, strip text from images and transcribe audio.

GPT-2 Detector
This OpenAI tool, hosted on Hugging Face, helps identify AI-generated text.

GPTZero
Detects whether an essay was written by ChatGPT, helping educators combat AI-based plagiarism. Built by a Princeton computer science student who is a former data journalist with the BBC.

Jasper AI
AI story-writing tool for fiction and nonfiction. Pick a tone of voice for style; pre-built templates are available. However, no sources are provided. $29 a month.

Lensa
Create digital self-portraits with AI, using the open-source Stable Diffusion model to render selfies into artwork. Developed by Prisma Labs, a California-based company founded by Russian developers. One-week free trial, then $30 a month. The avatar tool costs a separate $3.99 for 50 images.

Lex
AI text-generation tool.

Make-A-Video
Meta’s AI system that turns text into video. Not yet available to the public.

Maker
Generates written and visual content. Free trial, then $25 a month.

Midjourney
This AI image generator uses machine learning to create pictures from text. One of its images sparked controversy by winning an art competition.

Munch
Uses AI to repurpose video content for social channels and more.

NightCafe
Create art with the help of Artificial Intelligence.

NovelAI
AI story creator. Easy to use, but fantasy fiction only. Starts at $10 a month.

Postwise.ai
AI-driven Twitter writing tool.

QuillBot
An AI-driven writing tool that paraphrases what you say.

Stable Diffusion
Generates visual creations through AI. Because it is open source, anyone can view the code, and there are fewer restrictions on how it can be used than with DALL-E. (A minimal usage sketch follows this list.)

VanceAI Art Generator
Read more about it here.

VALL-E
Microsoft's AI-powered voice synthesis tool can replicate someone’s voice from just a 3-second sample. The machine learning model is not yet available to the public.
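
Since Stable Diffusion is open source, you can run it yourself. Here is a minimal sketch using Hugging Face's diffusers library; the runwayml/stable-diffusion-v1-5 checkpoint and the CUDA-capable GPU are assumptions about your setup, not details from the list above.

import torch
from diffusers import StableDiffusionPipeline

# Load one popular open Stable Diffusion checkpoint from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; other SD model ids work too
    torch_dtype=torch.float16,         # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# Generate one image from a text prompt and save it.
image = pipe("a lighthouse at dusk, watercolor").images[0]
image.save("lighthouse.png")

Because the weights are open, you can swap in other checkpoints or run the pipeline entirely on your own hardware, which is part of what "fewer restrictions than DALL-E" means in practice.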

More Tech Tools

The 3 Things Far-Right & Far-Left Political News Sources have in Common

When researchers analyzed almost 6,000 political news stories produced by partisan and nonpartisan media outlets in 2021, three things became clear:

  • Media outlets with extreme biases — regardless of whether it was a conservative or liberal bias — tended to use shorter sentences and less formal language than nonpartisan outlets.

  • Mainstream news organizations, as a whole, wrote at a higher reading level.

  • Far-right and far-left outlets took a more negative tone than nonpartisan outlets. They generally had a lower ratio of positive to negative words.

The researchers describe their findings in a paper forthcoming in Journalism Studies, “At the Extremes: Assessing Readability, Grade Level, Sentiment, and Tone in US Media Outlets.”

Read the full article from The Journalist’s Resource here. For the curious, a rough sketch of how metrics like these can be computed follows.
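
The summary above doesn't include the researchers' code, but the two measures behind these findings -- a reading-grade score and a positive-to-negative word ratio -- are easy to illustrate. The sketch below uses the standard Flesch-Kincaid grade formula and two tiny, made-up sentiment word lists; every name and word list here is illustrative, not the study's actual lexicon or method.

import re

# Toy sentiment lexicons -- illustrative stand-ins, NOT the study's lexicon.
POSITIVE = {"good", "success", "improve", "trust", "fair", "hope"}
NEGATIVE = {"bad", "crisis", "fail", "corrupt", "fear", "attack"}

def count_syllables(word: str) -> int:
    # Crude heuristic: each run of vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

def pos_neg_ratio(text: str) -> float:
    # Ratio of positive to negative lexicon hits; infinite if no negatives.
    tokens = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return pos / neg if neg else float("inf")

story = "The corrupt deal deepened the crisis. Voters fear another failure."
print(f"Grade level: {flesch_kincaid_grade(story):.1f}")
print(f"Positive-to-negative ratio: {pos_neg_ratio(story):.2f}")

In these terms, a lower grade level corresponds to the shorter, less formal sentences the researchers observed at the extremes, and a ratio below 1.0 to their more negative tone.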