Dealing with False Guilt

Here are four options for dealing with false guilt.

1. Remove the Source of Guilt (the conscience)
This may only desensitize us to actual wrongs and could lead to a denial of real evil in our lives.

2. Emphasize Self-Potential
This fails to address the underlying problems and ignores any real wrongs.

3. Emphasize Punishment
This can lead to feeling guilty only when caught, while ignoring legitimate conviction.

4. Emphasize Forgiveness
If the guilt is false, there is no basis for forgiveness, and the person may feel they haven’t suffered enough.

When AIs are Bribed or Threatened

What’s bizarre about LLMs is that they act more like humans than we think they should. For example, some researchers have tested the hypothesis that LLMs perform better when offered a cash reward or threatened with death. It also turns out that some of the best ways to “jailbreak” LLMs (getting them to disobey their creators’ explicit instructions) look a lot like the kinds of social engineering tricks that humans use on each other: for example, pretending to be someone else or saying that the request is just a joke. But other effective jailbreaking techniques are things no human would ever fall for. One group found that if they used ASCII art (constructions of symbols that look like words or pictures) to pose dangerous questions, like how to build a bomb, the LLM would answer them willingly. - IEEE Spectrum

19 Things People are Trying to Get AI to Do

AI agents’ promise to arrange your finances, do your taxes, book your holidays – and put us all at risk – The Conversation

Coming soon: Ph.D.-level super-agents - Axios

AI could transform health care, but will it live up to the hype? – Science News 

Using AI to talk to animals – Axios  

The Firm That Wants to Power AI With Southern Yellow Pine – Wall Street Journal

Twelve Labs is building AI that can analyze and search through videos – TechCrunch 

Samsung has developed an audio eraser feature for smartphones that will allow users to erase unwanted sounds from videos – Data Country 

Extreme Weather Is Taxing Utilities More Often. Can A.I. Help? – New York Times  

Are AI-created recipes hard to swallow? – BBC

The world-changing ‘killer app’ for AI could be nuclear fusion - Washington Post  

OpenAI starts testing prototype of new AI search tool - Axios

OpenAI working on new reasoning technology under code name ‘Strawberry’ – Reuters

How AI could transform baseball forever – Washington Post

How generative AI could reinvent what it means to play – MIT Tech Review

Communicate with animals, win millions: Inside the wild new world of AI prizes – Semafor

Can AI make better chocolate chip cookie recipes than humans? We taste tested 2 – NPR

Can AI police itself? Experts say chatbots can detect each other’s gaffes. – Washington Post

Digital Twins and AI in Next-Gen Nuclear Reactor Operations – AIwire

Five Ways AI Will Break Software Development – Inside AI News

23 Articles about the Business of Running an AI Company

Amazon AI deal leaves ‘zombie’ start-up in its wake, whistleblower says – Washington Post

Google’s Resolution for 2025: Catch Up to ChatGPT – Wall Street Journal

China's AI keeps getting better — and cheaper – Axios

Joe Biden signs executive order to speed AI data center construction – The Verge

Google is forming a new team to build AI that can simulate the physical world – TechCrunch

Nvidia sends less powerful AI chips to China – Tom’s Hardware

OpenAI, Google and Runway race to generate AI video – Axios

AI-powered avatars can gesture naturally as they speak – New Scientist

OpenAI says it needs ‘more capital than we’d imagined’ as it lays out for-profit plan - CNBC

Microsoft Is Forcing Its AI Assistant on People—and Making Them Pay - Wall Street Journal

Nonprofit group joins Elon Musk’s effort to block OpenAI’s for-profit transition – TechCrunch

Tech dollars flood into AI data centers - Axios

Don’t Look Now, but China’s AI Is Catching Up Fast - Wall Street Journal 

OpenAI Unveils New A.I. That Can ‘Reason’ Through Math and Science Problems – New York Times

Google DeepMind unveils a new video model to rival Sora - TechCrunch  

Your Meta Ray-Ban smart glasses just got a massive AI upgrade – ZDNET

The year in AI: Catch up on the top AI news of 2024 – TechTarget

Mobile pioneers say they’ll make ‘agentic AI’ a reality with new platform – Semafor  

OpenAI launched a 1-800-ChatGPT hotline, allowing users to interact with its chatbot over the phone - CNN

OpenAI says it has no plans to release an API for Sora - TechCrunch

Databricks co-founder offers $1 million prize to solve AI coding problems – Semafor

Sam Altman on ChatGPT’s First Two Years, Elon Musk and AI Under Trump – Bloomberg  

Google introduces a new quantum computing chip called Willow, designed to be faster and more reliable than previous chips – TechCrunch

Why We Lie

A life of total dedication to the truth means... a life of total honesty. It means a continuous and never-ending process of self-monitoring to assure that our communications – not only the words that we say but also the way we say them – invariably reflect as accurately as humanly possible the truth or reality as we know it. Such honesty does not come painlessly. The reason people lie is to avoid the pain of challenge and its consequences.

M. Scott Peck, The Road Less Traveled

14 Webinars This Week about AI, Journalism & Media

Tue, Jan 21 - How to Build a Podcast Media Empire for You and Your Business!

What: In this webinar, you'll learn how to create, grow, and monetize a podcast that boosts your brand, expands your reach, and positions you as an industry leader.  

Who: Jeffrey Hayzlett, Chairman & Founder, C-Suite Network

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine Network

More Info

 

Tue, Jan 21 - Influencers: How Can They Help Advance Media Literacy?

What: Learn about how social media influencers have been recruited to address important issues in media literacy related to health and safety. In Croatia, social media influencers have collaborated with media literacy experts to combat online harassment. In this program, we'll discuss strategies to develop partnerships with popular online influencers that could be leveraged to combat dis/mis/malinformation.

Who: Igor Kanižaj, PhD, is a Full Professor at the Catholic University of Croatia, Department for Communication Science; Marshall S. Rich is a pioneer in the emerging field of forensic cyberpsychology; Scott H. Vlachos is the Executive Director of the Council for Emerging National Security Affairs.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Tue, Jan 21 - AI-Powered Nonprofits: 2025 Report on AI Adoption Rates, Use Cases, and Outcomes

What: Gain exclusive insights from our recent survey of over 1,000 nonprofits exploring how AI is reshaping the sector. This webinar will unpack the latest findings on how organizations like yours leverage AI to drive efficiency, enhance engagement, and achieve measurable outcomes. From exploring real-world use cases to understanding adoption trends and overcoming challenges, this session will provide actionable insights to help your nonprofit harness the power of AI for greater impact. Whether considering AI for the first time or looking to optimize your current solutions, join us to learn how to position your organization for success in 2025 and beyond.

Who: Kyle Barkins & Joe DiGiovanni, Tapp Network co-founders.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Tue, Jan 21 - AI for Trainers

What: Are you ready to dip your toes into AI for training, but feeling a bit intimidated? This session is an introduction for trainers, a friendly starting point. Whether you tap into AI to design innovative training programs, automate tasks, or personalize the learning experience, it all begins with leveraging prompts.

Who: Becky Pike Pluth, President and CEO, The Bob Pike Group; William Rintz from UMU, an AI-powered performance learning platform.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine Network

More Info

 

Tue, Jan 21 - Conversation with Reynolds Journalism Institute Project Fellows

What: A conversation with Reynolds Journalism Institute Project Fellows and the projects they are working on.

Who: Andy Lee Roth is directing a project on Algorithmic Literacy for Journalists; Aura Walmer is building a web-based toolkit that provides journalists with step-by-step guidance for creating data sonification projects; Zoli Csernatony and Dana Amihere are developing DigInThere, a tool to help newsrooms promote more informed, positive engagement in the comments sections.

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Society of Professional Journalists

More Info

 

Wed, Jan 22 - Artificial Intelligence and Ethics

What: Find out what audiences expect from news when it comes to using AI. Join a discussion about how AI is being used in photography and what actions journalists and policymakers must consider.

Who: Chris Frost, council chair of the National Union of Journalists’ Ethics Council; Felix Simon, research fellow at the Reuters Institute for the Study of Journalism; Nick Dunmur, photographer and Head of Business & Legal at the Association of Photographers; Mathilde Pavis, international expert in intellectual property, data and ethics.

When: 12:30 pm

Where: Zoom

Cost: Free

Sponsor: National Union of Journalists, the voice of journalism in the UK and Ireland for all media, communications and PR workers

More Info

 

Wed, Jan 22 - AI Tools for Video Creation

What: We’ll cover how to use Sora journalistically and how to disclose its use to readers. We’ll also look at LensGo, FLUX, Google Video, Runway ML and other text-to-video and image-to-video tools. Participants will be given a handout with links to all the tools and exercises on how to use them. Prior to the session, have accounts set up at Sora and RunwayML, and have access to a YouTube account if you want to post the videos there afterward.

Who: Mike Reilley, Senior Lecturer, University of Illinois-Chicago.

When: 2 pm, Eastern  

Where: Zoom

Cost: Free for members, $25 for nonmembers

Sponsor: Online News Association

More Info

 

Wed, Jan 22 - Copyright Confusion? Fair Use to the Rescue!

What: Definitive answers about fair use are tough to find, as most copyright decisions are determined by the courts. Yet library patrons seek guidance about their use of copyrighted materials more often than the reference desk can sometimes handle. This webinar offers strategies for helping patrons make their own use decisions and for solidifying the foundations on which fair use decisions are made.

Who: Sara Wolf is an associate professor of technology and media at Auburn University.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Niche Academy

More Info

 

Wed, Jan 22 - Storytelling for Impact

What: Tips, techniques and tools to help the modern marketer tell better and more impactful stories to activate their audiences around ideas and actions.

Who: Firespring’s Kiersten Hill

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Firespring

More Info

 

Wed, Jan 22 - The Sports Advisers Are In: Breaking Down Barriers

What: There are many reasons why a sports staff may not be generating quality sports content, but the biggest hurdle is not always poor access to athletes. It might also be due to limited imagination and an over-reliance on sources. There are many other ways to diligently and creatively cover athletics on your campus. Stop in to learn ways to develop relationships with athletes, coaches and athletic departments, to take new approaches, and to develop story ideas that should elevate your sports coverage across any platform.

Who: Joe Gisondi, a student media adviser for more than 20 years, is a professor of journalism and director of student publications at Eastern Illinois University; John DiCarlo is the Managing Director of Student Media and the Claire Smith Center for Sports Media at Temple University's Klein College of Media and Communication; Chris Babb is in his sixth year as an instructor in the Rogers Department of Communication and faculty advisor for the student sports media production program at Ouachita Baptist University in Arkansas.

When: 5 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: College Media Association

More Info

 

Thu, Jan 23 - Introduction to Reporting on AI

What: Designed for reporters interested in getting started but with minimal or no knowledge of AI. We will dissect what makes a good AI accountability story, from quick-turnaround stories to more ambitious investigations, and dig deeper into a few examples.

Who: Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She was formerly a contributing writer at The Atlantic, a foreign correspondent covering China’s technology industry for the Wall Street Journal, and a senior editor for AI at MIT Technology Review.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Pulitzer Center

More Info

 

Thu, Jan 23 - 5 Surprising Ways to Use AI to Empower Your Editorial Teams

What: AI is transforming the way editorial teams work, but are you using it to its full potential? In this free webinar, we’ll explore five surprising ways AI can enhance your editorial processes, from streamlining content creation to improving audience engagement and optimizing workflows. Discover practical applications you might not have considered and learn how to integrate these tools seamlessly into your team’s daily tasks. 

Who: David Arkin, CEO of David Arkin Consulting.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: New England Newspaper & Press Association

More Info

 

Thu, Jan 23 - Expert Panel Discussion: Learning Trends 2025

What: This expert panel will share their insights on the transformative learning technologies poised to shape the future.

Who: Karl Kapp, Director, Institute for Interactive Technologies, Bloomsburg University; Tony O’Driscoll, Research Fellow and Academic Director, Duke University; David Metcalf, Ph.D., Director, Mixed Emerging Technology Integration Lab, University of Central Florida; Anders Gronstedt, President, The Gronstedt Group.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenSesame

More Info

 

Thu, Jan 23 – Podcasting 101

What: An insightful discussion on Podcasting 101, where our expert podcasters will cover everything from choosing the right equipment and software to building an engaged audience, monetizing your content, and much more!

Who: Charlotte Norsworthy Varnum, Executive Director at The Red & Black; Wanda Lloyd, Northwestern School of Management/Maynard Institute; Mitch Leff, President Leff & Associates.

When: 7 pm, Eastern

Where: Zoom

Cost: Free; RSVP at SPJGeorgia@gmail.com

Sponsor: Society of Professional Journalists

More Info

12 Ways to Spot AI Images or Videos

Tips for determining whether an image or video was likely created by AI.

THE BACKGROUND. Are people in the background reacting to the unusual thing going on? If they are going about their business as though nothing noteworthy is happening, the image is likely fake. Often, the background of AI images will be distorted. Sometimes odd shapes in background details, such as floor tiles or walls, are giveaways.

OTHER VIDEOS & PHOTOS. If the video or image is of a news event and there are no other videos or images showing different angles, it may be AI-generated. It is unlikely that there would be only a single image or video of something odd or newsworthy.

DETAILS. AI generators are not good at details—like fingers, hands and hair. Many times, AI software will show too many fingers or odd hand placement. Other oddities might be mismatched shoes or earrings, a misshapen jaw, or extra legs.

WRITING. Look closely at writing on a sticker, street sign or billboard. Watch for writing that is blurry when it shouldn’t be, malformed letters, or letters that don’t spell actual words.

GLOSSY. The overly glossy look, similar to some stock photos, can be an AI giveaway. Watch for people with plastic-looking faces.

THE SOURCE. Is the person or organization sharing the image reliable and not known for promoting AI-generated media?

THE EYES. In deepfake videos, people sometimes blink oddly or make strange eye movements. Researchers at Cornell University found that deepfake faces don’t blink properly. Also, using techniques devised for measuring galaxies, researchers have found that deepfake images lack consistent reflections across both eyes.

THE FACE. Look carefully at the area around the face for evidence that it was swapped onto another person’s body.

THE LIPS. Do the lips have abnormal movements and unrealistic positioning?

MOVEMENT. Watch for unnatural jumps or the absence of motion blur that is typically present in authentic videos. If creators manipulate AI-generated photos using Photoshop techniques such as blurring or file compression, they can fool detection tools.

PATTERNS. AI images often break the physics of lighting, producing abnormal patterns in shadows and highlights.

PHOTOMETRIC CLUES. Look at “photometric” clues such as blurring around the edges of objects that might suggest they’ve been added later; noticeable pixelation in some parts of an image but not others; and differences in coloration. (A rough compression-based check is sketched below.)
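One way to probe those photometric clues is error level analysis (ELA), a common forensic check: re-save the image as a JPEG at a known quality and look at where the difference is unusually bright. This is a minimal sketch, assuming the Pillow library and a placeholder file name; ELA is a hint, not proof, and can be defeated by the re-compression tricks mentioned above.

from PIL import Image, ImageChops

# Load the suspect image (placeholder file name) and re-save it as JPEG
# at a fixed quality, so every region is recompressed the same way.
original = Image.open("suspect.jpg").convert("RGB")
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg")

# Regions edited after the photo's last save tend to recompress differently,
# so they show up brighter in the difference image.
diff = ImageChops.difference(original, resaved)
extrema = diff.getextrema()  # per-channel (min, max) pixel values
scale = 255.0 / max(max(hi, 1) for _, hi in extrema)
ela = diff.point(lambda p: min(255, int(p * scale)))  # amplify for viewing
ela.save("ela.png")  # uniformly dark = consistent; bright patches = suspect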

More about spotting fake news

AI Definitions: Explainability

Explainability (or explainable AI; similar to but not the same as interpretability or interpretable AI) – While interpretability relates to understanding an AI’s inner workings, explainable AI focuses on observed patterns in what the AI does in order to draw conclusions. Applied after a model has already made its decision or prediction, explainability offers insight into which features or variables played into the outcome, in an effort to ensure accuracy, fairness and user trust. Explainability focuses on individual decisions rather than on the model as a whole. Because explainability techniques are applied after the fact, they can be used with any model. On the downside, they can oversimplify a model's decision-making process, and their output is often difficult for non-experts to understand. Some governments are requiring that AI systems include explainability. (A minimal sketch of one post-hoc technique follows.)
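Because explainability treats the model as a black box, one simple post-hoc approach is local sensitivity analysis: nudge each feature of a single input and watch how the prediction moves. This sketch is illustrative only; the predict function, weights, and feature values are hypothetical stand-ins for any trained model.

import numpy as np

# Hypothetical stand-in for any trained model's prediction function;
# post-hoc explainability never looks inside it.
def predict(x):
    weights = np.array([0.8, -0.5, 0.1])
    return float(1 / (1 + np.exp(-x @ weights)))  # probability-like score

def explain_instance(x, predict_fn, eps=0.1):
    # Score each feature by how much nudging it shifts this ONE prediction.
    base = predict_fn(x)
    sensitivities = []
    for i in range(len(x)):
        nudged = x.copy()
        nudged[i] += eps
        sensitivities.append((predict_fn(nudged) - base) / eps)
    return base, sensitivities

x = np.array([1.2, 0.4, -0.7])  # the single decision we want explained
base, sens = explain_instance(x, predict)
print(f"prediction: {base:.3f}")
for i, s in enumerate(sens):
    print(f"feature {i}: sensitivity {s:+.3f}")  # sign = direction of influence

Because the sketch only ever calls predict_fn, it works with any model, which is exactly the trade-off the definition describes: broad applicability, but an explanation that may oversimplify what the model actually computed.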

More AI definitions here

False Starts

It is important to distinguish between a real new beginning in someone’s life and a simple defensive reaction to an ending. Each may exert strain on a relationship, but the new beginning must be honored. The defensive reaction is simply a new way of perpetuating the old situation and needs to be considered as such.

Unfortunately, there is no psychological test you can take at such times. It is often difficult to be sure whether some path leads forward or back, and it may be necessary to follow it for a little way to be sure. But there are two signs that are worth looking for before you start. The first is the reaction of people who know you well: not whether they approve or disapprove, but whether they see what you propose to do as something new or simply a replay of an old pattern. The second indication comes from the transition process itself: Have you really moved through endings into the neutral zone and found there the beginning you now want to follow, or is this “beginning” a way of avoiding an ending or aborting the neutral zone experience?

William Bridges, Transitions

13 Ways to Spot AI Writing

Tips for determining whether an article was likely written by AI.

OVERUSED WORDS. AI-written articles tend to come back to the same terms multiple times. Examples include comprehensive, delve, meticulous, versatile and pivotal. Before 2024, overused AI words in scientific research papers were typically nouns. More recently, researchers say AI excessively uses "style" words—mostly verbs and some adjectives. The phrases AI picks up can often make the text sound more like marketing material than academic scholarship or quality news writing. (A rough counting sketch follows this list.)

TORTURED ACRONYMS. Generative AI will sometimes pick up the wrong words for an acronym. For instance, a data science paper might use “CNN” to refer to “convolutional brain organization” instead of “convolutional neural network.”

NONSENSICAL PARAPHRASES. An academic paper written by AI might have “glucose bigotry” instead of “glucose intolerance,” swapping a single word without recognizing the context.

ACADEMIC CITATIONS. AI-written articles with academic citations have been known to include incorrect or incomplete references. AI writing has also been known to take quotations out of quotation marks, paraphrase them, and delete the citation.

STYLE CHANGES. A sudden change in writing style within an article or essay may indicate that the author’s work was rewritten using AI.

PERFECT GRAMMAR. A typo, particularly in student writing, could indicate the article or essay is not wholly the work of a bot. Mistake-free writing is, ironically, a red flag. However, savvy writing prompts may ask the AI to include some errors in order to mislead inspectors.

MECHANICAL STYLING. AI tends to mechanically repeat expressions that appear often in the internet material it was trained on. The result is uninspired, generic prose that often lacks any specific point.

ARTICLES. AI will make errors in the use of definite and indefinite articles, often because it does not recognize from context whether an article is required and which one. For example, AI editors will often fail to use the definite article before common nouns such as “participants” and “results” when referring to a study. “Results show that…” is a general reference, while “The results show that…” refers to the present study. Generative AI will miss this distinction.

SUBJECT-VERB AGREEMENT. AI often fumbles subject-verb agreement when the verb does not immediately follow the subject.

VERB TENSE. Generative AI will sometimes confuse when to use past tense and when to use present tense.

LINKING WORDS. AI editors tend to delete words linking sentences and paragraphs, such as “however,” “therefore,” “in contrast,” and “moreover.”

ARCHAIC LANGUAGE. Since older texts from the early twentieth century have been more available as LLM training data than current writing samples, some researchers treat the overuse of words that were popular then but have since fallen out of common usage as evidence of generative AI.

PREDICTABILITY. Text is more likely to be human than AI when it includes sarcasm, current pop-culture references or insults aimed at the reader. Writing that doesn't match predictable patterns is more likely to be human-generated.
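To make the "overused words" tip concrete, here is a rough counting sketch, not a validated detector: it tallies how often a handful of AI-favored terms from the list above appear per 1,000 words. The word list is drawn from the examples given; any threshold you might apply is an illustrative assumption.

import re
from collections import Counter

# Illustrative word list taken from the tip above; a real check would use
# a larger, evidence-based vocabulary.
AI_FAVORED = {"comprehensive", "delve", "meticulous", "versatile", "pivotal"}

def flag_rate(text):
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter(w for w in words if w in AI_FAVORED)
    per_1k = 1000 * sum(hits.values()) / max(len(words), 1)  # avoid divide-by-zero
    return per_1k, hits

sample = "We delve into a comprehensive and versatile framework with pivotal results."
rate, hits = flag_rate(sample)
print(f"{rate:.1f} flagged words per 1,000 words; breakdown: {dict(hits)}")

A high rate is only a signal to read more closely; human writers use these words too.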

More about spotting fake news

Taking Action

It doesn’t matter if you have a genius IQ and a PhD in quantum physics: you can’t change anything or make any sort of real-world progress without taking action. There’s a huge difference between knowing how to do something and actually doing it. Knowledge and intelligence are both useless without action. It’s as simple as that.

Successful people know that a good plan executed today is better than a perfect plan executed someday. They don’t wait for the “right time” or the “right day” or the “right (impossible) circumstances”, because they know these reactions are based on fear and nothing more. They take action here and now, today – because that’s where real progress happens.

Angel Chernoff

AI Hallucinations "can be remarkably useful"

Innovators are finding that A.I. hallucinations can be remarkably useful. “The public thinks it’s all bad,” said Amy McGovern, a computer scientist who directs a federal A.I. institute. “But it’s actually giving scientists new ideas. It’s giving them the chance to explore ideas they might not have thought about otherwise.” A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs.

– William Broad, writing in the New York Times