Research: What Happens When Workers Use AI

Our AI research findings carry important implications for the future of work. If employees consistently rely on AI for creative or cognitively challenging tasks, they risk losing the very aspects of work that drive engagement, growth, and satisfaction. Increased boredom, which our research found follows AI use, can also be a warning sign that these negative consequences might be on their way. The solution isn’t to abandon gen AI. Rather, it’s to redesign tasks and workflows to preserve humans’ intrinsic motivation while leveraging AI’s strengths. -Harvard Business Review

AI Definitions: Narrow AI

Narrow AI – The use of artificial intelligence for a very specific task or a limited range of tasks. For instance, a general AI would be an algorithm capable of playing all kinds of board games, while a narrow AI is limited to a specific game such as chess or Scrabble. Google Search, Alexa, and Siri all answer queries using narrow AI algorithms. Such systems can often outperform humans when confined to known tasks, but they often fail when presented with situations outside the problem space they were trained on. In effect, narrow AI can’t transfer knowledge from one field to another. The narrow AI techniques we have today fall broadly into two categories: symbolic AI and machine learning.
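To make those two categories concrete, here is a toy sketch using spam filtering as the single narrow task. All rules, example messages, and thresholds below are invented for illustration: a symbolic system encodes its decision rules by hand, while a machine-learning system derives its rule from labeled examples.

```python
# Toy contrast of the two narrow-AI categories, using one narrow task:
# deciding whether a message is spam. (All rules and data are invented.)

# 1) Symbolic AI: a human writes the decision rules explicitly.
def symbolic_is_spam(message):
    banned = ("free money", "click now", "you are a winner")
    return any(phrase in message.lower() for phrase in banned)

# 2) Machine learning: the rule is inferred from labeled training examples.
# Here we "learn" the exclamation-mark count that best separates the classes.
def learn_spam_threshold(examples):
    best_threshold, best_correct = 0, -1
    for t in range(10):
        correct = sum((msg.count("!") >= t) == is_spam
                      for msg, is_spam in examples)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

training_data = [
    ("Meeting at 3pm", False),
    ("Lunch tomorrow?", False),
    ("FREE MONEY!!! Click now!!!", True),
    ("You are a winner!!!!", True),
]
threshold = learn_spam_threshold(training_data)

def learned_is_spam(message):
    return message.count("!") >= threshold
```

Both versions are "narrow" in exactly the sense above: neither the hand-written rules nor the learned threshold transfers to any task other than the one they were built for.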

More AI definitions here.

19 Webinars this Week about AI, Journalism & Media

Mon, June 9 - Generative AI Fundamentals

What: We will explain generative artificial intelligence and discuss its impact. You will gain a basic understanding of its shortcomings, as well as the ways it can be used effectively. We will discuss some of the tools available to you through Duke. You will leave the session understanding how to create prompts that will get you the best results in your conversations with the AI.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Duke University

More Info

 

Tue, June 10 - An Editor's Guide to AI, With Pulitzer Center and Africa Check

What: This workshop will identify different types of AI stories and explore what distinguishes the best media coverage of artificial intelligence. The virtual event is geared toward editors in North America, South America, Africa, and Europe: anyone in charge of directing coverage, commissioning stories, or packaging and producing them.

Who: Tom Simonite edits technology coverage for The Washington Post from San Francisco; Bina Venkataraman serves as editor-at-large for opinion strategy and innovation at The Washington Post.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Pulitzer Center

More Info

 

Tue, June 10 - AI 101: An Interactive Journey through Core Concepts, from Neural Networks to LLMs

What: In this engaging, hands-on session, you'll get a fun and interactive introduction to AI fundamentals—from understanding how large language models tokenize and process language, to exploring differences between traditional and deep machine learning. Through keyword exercises, mini-games, and thought-provoking prompts, you'll gain the confidence to identify real business challenges and discover where AI can truly make an impact in your organization.

Who: Gary Lamach, Vice President, Client Solutions, ELB Learning.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: ELB Learning

More Info

 

Tue, June 10 - How to Cover Education Department Cuts in Your Community

What: School superintendents have spent months planning curricula that may be altered by budget cuts, mass layoffs and mandates to eliminate programs promoting diversity, equity and inclusion for students. To help journalists report on these changes and how they’ll affect students and families, expert panelists will provide context on new federal education policies and tips on finding the local angle in this important national story.

Who: Jill Barshay, writer/editor at Hechinger Report; Noelle Ellerson Ng, associate executive director of advocacy & governance at the School Superintendents Association; Stephen Provasnik, former deputy commissioner of the National Center for Education Statistics at the U.S. Department of Education; Keri Rodrigues, co-founder and president of National Parents Union.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The National Press Foundation

More Info

 

Tue, June 10 - Foundations for Responsible AI Engagement

What: This session will explore what it means to use AI responsibly. We'll discuss how different groups (students, faculty, and professionals) are engaging with AI and unpack the challenges facing us all. These include concerns around academic integrity, data privacy, bias, hallucination, and evolving expectations around citation and copyright. Participants will leave with practical strategies for establishing course or departmental policies, modeling responsible AI use, and supporting student AI literacy.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Duke University

More Info

 

Tue, June 10 - Harnessing AI to Improve Your Operations and Fundraising

What: Join us for an insightful webinar on leveraging AI to enhance fundraising outreach, development operations, and data management. Learn how AI & Automation can streamline donor engagement, personalize outreach, and optimize data processes to drive more effective fundraising efforts. We’ll explore practical applications, best practices, and real-world examples to help your nonprofit maximize efficiency and impact. Don’t miss this opportunity to revolutionize your fundraising strategy with AI! 

Who: Terry Cangelosi is a Senior Director and Head of Operations at Orr Group; Abby Carlson is a Director and the Head of Data Analytics & Management at Orr Group; Dani Cluff is the Channel Marketing Coordinator at Bloomerang.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Bloomerang

More Info

 

Wed, June 11 - Should Policymakers Regulate Human-AI Relationships?

What: A discussion on the potential benefits and risks of AI companions, what the early research says about this emerging technology, and how policymakers can support responsible innovation.

Who: Alex Ambrose, Policy Analyst (Moderator); Taylor Barkley, Director of Public Policy, Abundance Institute; Melodi Dinçer, Policy Counsel, Tech Justice Law Project; Cathy Fang, PhD Student, MIT Media Lab; Clyde Vanel, Assemblyman (D-NY).

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Information Technology & Innovation Foundation

More Info

 

Wed, June 11 - Boost Your Workflow with AI: Productivity Tips and Strategies

What: This session will provide participants with practical techniques for utilizing generative artificial intelligence to help with everyday work tasks. AI can help summarize meeting minutes, draft emails, brainstorm ideas, create images for slides, etc. We will provide how-to tips to get you started and showcase several useful tools.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Duke University

More Info

 

Wed, June 11 - AI Ethics

What: Artificial Intelligence is transforming the business landscape, offering vast potential and raising critical ethical challenges around bias, transparency, and privacy. This session will explore past missteps, key decision-making pressures, and best practices to ensure responsible, values-driven AI development.

Who: Sagnika Sen, Associate Professor at Penn State Great Valley & expert in AI and data-driven business strategy.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Small Business Development Center at Kutztown University of Pennsylvania

More Info

 

Wed, June 11 - How Nonprofit Professionals Can AI-Proof Their Careers

What: This webinar will help you become AI-literate in the concepts most likely to immediately impact your career, so you can start the process of upskilling now and thrive in your career for years to come.

Who: Heather Mansfield, Founder of Nonprofit Tech for Good

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Tech for Good

More Info

 

Wed, June 11 - Digital Lifeline: Saving At-Risk Data

What: A workshop on identifying, preserving and reporting on government data. In an era where federal data is at risk of disappearing or being altered, this training will equip you with the tools and knowledge to safeguard critical information.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Sunlight Research Center, Data Rescue Project, MuckRock, and the Data Curation Network  

More Info

 

Thu, June 12 - How AI can enhance your journalism

What: How AI can work as your research partner, helping to brainstorm ideas, mine data, uncover angles and streamline workflows.

Who: Harriet Meyer, an experienced financial journalist.

When: 8 am, Eastern  

Where: Zoom

Cost: £5.00

Sponsor: Women in Journalism

More Info

 

Thu, June 12 - How To Take AI From Buzzword to Newsroom Backbone

What: How the Aftonbladet newsroom built an AI hub, trained Prompt Queens, and experimented with everything from editorial copilots to US election chatbots. Some ideas failed fast, others became surprise hits. This is the story of what worked, what didn’t, and what they learned along the way. 

Who: Aftonbladet’s Deputy Publisher and Director of Editorial AI & Innovation Martin Schori.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Online News Association

More Info

 

Thu, June 12 - Dealing with Leaks in the Age of AI and Disinformation

What: Concrete strategies to equip journalists with the tools they need to navigate leaks with integrity, rigor, and security.

Who: Robert Libetti, a journalist and filmmaker who was part of the 2025 Nieman class at Harvard.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Harvard’s Shorenstein Center & The Journalist's Resource.

More Info

 

Thu, June 12 - From Knowing to Doing: Using AI to Power Skills Practice

What: In this highly experiential session, you’ll explore how to create learning experiences that go beyond transferring knowledge to build real skills that lead to behavior change. Through interactive examples—including eLearning scenarios, AI-enabled practice, structured feedback, and safe, realistic rehearsal with AI—you’ll experience first-hand what effective skills practice looks like. You’ll discover how AI can help scale and support these methods, enabling learners to build skills faster and more effectively. You’ll leave with practical strategies to help people not just know what to do, but actually do it.

Who: Danielle Wallace, Chief Learning Strategist, Beyond the Sky Custom Learning.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine Network

More Info

 

Thu, June 12 - Maximizing the Impact of Global News Distribution

What: The conversation will feature real-world examples, actionable insights, and expert tips to help you craft messages that resonate across regions, spark journalist interest, and track the performance of your campaigns.

Who: Kelvin Chan, Business Writer, The Associated Press (London); Zoë Clark, Sr. Partner, Head of Media and Influence, Tyto PR (London); Natassia Culp, Global Corporate Communications Lead, Wasabi Technologies (US); John Lerch, Sr. Director, Global Marketing, Tigo Energy (US).

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Business Wire

More Info

 

Thu, June 12 - Closing the Gap: Sales Training for the New Era of News Media

What: Sales in local for-profit and nonprofit news has shifted from selling ad space or pageviews to addressing psychological, strategic, and narrative-driven challenges. This session outlines how news organizations can adopt a sales approach rooted in sponsor alignment and value-based storytelling. Participants will learn how effective sales professionals operate as consultants who understand emotional decision-making, anticipate objections, and build trust across segmented and evolving markets. The session will focus on tailored communication strategies grounded in clarity, relevance, and long-term impact. 

Who: Richard Brown, the Chief of Growth and Innovation at Wisconsin Watch.

When: 1 pm, Eastern

Where: Zoom

Cost: $35

Sponsor: Online Media Campus

More Info

 

Thu, June 12 - Building Trust Through Authentic Storytelling 

What: A data-backed look at how brands can use authentic, inclusive visuals to bring their sustainability stories to life. Backed by new VisualGPS research and real-world examples, this session will explore how to translate complex concepts into powerful creative that resonates across channels.

Who: Tristen Norman, head of creative, Americas, Getty Images; Tawnya Crawford, VP and general manager of custom solutions, Getty Images.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Getty Images

More Info

 

Thu, June 12 - Upholding Academic Integrity: How Libraries Are Leading the AI Conversation

What: This webinar will explore how libraries are expanding their roles to proactively foster integrity and support their institutions.

Who: Jason Openo, Dean, School of Health and Community Services, Medicine Hat College; Josh Seeland, Manager of Library Services, Assiniboine College; Jane Costello, Senior Instructional Design Specialist, Centre for Innovation in Teaching and Learning, Memorial University of Newfoundland.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Springer Nature

More Info

The irrational ideas that motivate anger

According to Albert Ellis, the most common irrational ideas behind anger are the following:

1. Others must treat me considerately and kindly and in precisely the way I want them to treat me.

2. I must do well and win the approval of others or else I will rate as a rotten person.

3. The world and the people in it must arrange conditions under which I live, so that I get everything I want when I want it.

As their anger slows down, people should challenge irrational thoughts with statements such as:

What evidence exists for this? Why can't I stand this noise or this unfairness?

Gary Collins, Counseling and Anger

"Current AI Detectors are Not Ready"

"A new study of a dozen A.I.-detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I.-generated about 6.8 percent of the time, on average.  'At least from our analysis, current detectors are not ready to be used in practice in schools to detect A.I. plagiarism,' said Soheil Feizi, an author of the paper and an associate professor of computer science at Maryland."  -New York Times


Academic Leaders Disagree on Students using AI

“What constitutes legitimate use of AI and what is out of bounds? Academic leaders don’t always agree whether hypothetical scenarios described appropriate uses of AI or not: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—the verdict (in a recent survey) was completely split.” -Inside Higher Ed

AI Definitions: Neural Networks

Neural Networks (or artificial neural networks, ANNs) – Mathematical systems that can identify patterns in text, images, and sounds. In this type of machine learning, computers learn a task by analyzing training examples. The approach is modeled loosely on the human brain: an interwoven tangle of neurons that process data and find complex associations. While symbolic artificial intelligence was the dominant area of research for most of AI’s history, most recent developments in artificial intelligence have centered on neural networks. Neural nets were first proposed in 1944 by two University of Chicago researchers, Warren McCullough and Walter Pitts, who moved to MIT in 1952 as founding members of what’s sometimes referred to as the first cognitive science department. Neural nets remained a major research area in neuroscience and computer science until 1969. The technique enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and has returned stronger in the second decade, fueled largely by the increased processing power of graphics chips. See also “Transformers.”
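The phrase "learn a task by analyzing training examples" can be made concrete with a minimal sketch: a single artificial neuron that learns the logical OR function by repeatedly nudging its weights toward labeled examples. Real networks stack many such units in layers, and the learning rate and iteration count here are arbitrary choices for illustration.

```python
# A minimal sketch of the core neural-network idea: one artificial neuron
# learning the logical OR function from training examples via gradient descent.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training examples: input pair and target output (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Start with arbitrary weights and bias, then adjust them toward the targets.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5  # learning rate (chosen arbitrarily for this toy)

for _ in range(5000):
    for (x1, x2), target in examples:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient step for squared error through the sigmoid.
        grad = (target - out) * out * (1 - out)
        w1 += lr * grad * x1
        w2 += lr * grad * x2
        b += lr * grad

def predict(x1, x2):
    """Round the neuron's output to a 0/1 decision."""
    return round(sigmoid(w1 * x1 + w2 * x2 + b))
```

After training, `predict` reproduces OR on all four inputs. Nothing about the rule was programmed directly; the pattern was extracted from the examples, which is the property that distinguishes this approach from symbolic AI.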

More AI definitions here.

I’m just going to go for it

A North Dakota plumber had signed up to run his first half-marathon. But on the morning of the run Mike Kohler was sleepy. He wasn’t used to getting up so early. And he was wearing headphones, so he took off 15 minutes before he was supposed to—putting him with the runners competing in the full marathon. He started seeing signs that indicated he was on the wrong route, but he just assumed the two paths overlapped along the way.  

Eventually, he realized his mistake but kept going. At the 13-mile mark he seriously thought about quitting. He had run as far as he had planned to run and even beat his time goal. He had nothing more to prove.

Instead, he finished the marathon. 

“I’m just going to go for it, because why not?” Mike later told the Grand Forks Herald. “I’m already here, I’m already running, I’m already tired. Might as well try to finish it.” 

He added, “This just kind of proves you can do a lot more than what you think you can sometimes.”

AI Survival Instincts

"An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down. No one programmed the AI models to have survival instincts. It’s happening in the same models that power ChatGPT conversations, corporate AI deployments and, soon, U.S. military applications. OpenAI models have been caught faking alignment during testing. Anthropic has found them lying about their capabilities to avoid modification." -Wall Street Journal

If ChatGPT Were a College Student

“We found ChatGPT technology can get an A on structured, straightforward questions. On open-ended questions it got a 62, bringing ChatGPT's semester grade down to an 82, a low B. The study concludes that a student who puts in minimal effort, showing no effort to learn the material, could use ChatGPT exclusively, get a B and pass the course. The passing grade might be the combination of A+ in simple math and D- in analysis. They haven't learned much.” -Phys.org

The Four Skills of Daring Leadership

One of the most important findings of my career is that daring leadership is a collection of four skill sets that are 100 percent teachable, observable, and measurable. It’s learning and unlearning that requires brave work, tough conversations, and showing up with your whole heart. Easy? No. Because choosing courage over comfort is not always our default. Worth it? Always. We want to be brave with our lives and our work. It’s why we’re here.

Brené Brown, Dare to Lead 

17 Articles about AI & Academic Scholarship

Can generative AI replace humans in qualitative research studies? - Techxplore

The recent reduction in spelling error rates in academic papers could be due to an increased use of LLMs – OSF Preprints  

AI linked to explosion of low-quality biomedical research papers - Nature 

Flood of AI-assisted research ‘weakening quality of science’ – Times Higher Ed

Shoddy study designs and false findings using a large public health dataset portend future risk of exploitation by AI and paper mills – PLOS Biology

Is it OK for AI to write science papers? Nature survey shows researchers are split - Nature

MIT Says It No Longer Stands Behind Student’s AI Research Paper – Wall Street Journal  

Meta releases new data set, AI model aimed at speeding up scientific research – Semafor

Experiment using AI-generated posts on Reddit draws fire for ethics concerns – Retraction Watch

AI-Reddit study leader gets warning as ethics committee moves to ‘stricter review process’ – Retraction Watch  

Why misuse of generative AI is worse than plagiarism – Springer

Science sleuths flag hundreds of papers that use AI without disclosing it - Nature

Google engineer withdraws preprint after getting called out for using AI – Retraction Watch

Scientific Data Fabrication and AI—Pandora’s Box – JAMA Network

AI summary ‘trashed author’s work’ and took weeks to be corrected – Times Higher Ed

AI language models increasingly shape economics research writing, study finds – Phys.org

Artificial intelligence in vaccine research and development: an umbrella review – Frontiers

My Me-ness

“I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” The student said he felt crushed. Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.” - D. Graham Burnett writing in The New Yorker

"Madness" on Campus

On campus, we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. -D. Graham Burnett writing in The New Yorker