8 insightful quotes about AI Bias

In an analysis of thousands of images created by Stable Diffusion, we found that image sets generated for every high-paying job were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker.” Most occupations in the dataset were dominated by men, except for low-paying jobs like housekeeper and cashier. Bloomberg

Eight years ago, Google disabled its A.I. program’s ability to let people search for gorillas and monkeys through its Photos app because the algorithm was incorrectly sorting Black people into those categories. As recently as May of this year, the issue still had not been fixed. Two former employees who worked on the technology told The New York Times that Google had not trained the A.I. system with enough images of Black people. New York Times

MIT student Rona Wang asked an AI image creator app called Playground AI to make a photo of her look "professional." It gave her paler skin and blue eyes, and "made me look Caucasian." Boston Globe 

We have things like recidivism algorithms that are racially biased. Even soap dispensers that don’t read darker skin. Smartwatches and other health sensors don’t work as well for darker skin. Things like selfie sticks that are supposed to track your image don’t work that well for people with darker skin because image recognition in general is biased. The Markup

AI text may be biased toward established scientific ideas and hypotheses contained in the content on which the algorithms were trained. Science.org

No doubt AI-powered writing tools have shortcomings. But their presence offers educators an on-ramp to discussions about linguistic diversity and bias. Such discussions may be especially critical on U.S. campuses. Inside Higher Ed

Major companies behind A.I. image generators — including OpenAI, Stability AI and Midjourney — have pledged to improve their tools. “Bias is an important, industrywide problem,” Alex Beck, a spokeswoman for OpenAI, said in an email interview. She declined to say how many employees were working on racial bias, or how much money the company had allocated toward the problem. New York Times

As AI models become more advanced, the images they create are increasingly difficult to distinguish from actual photos, making it hard to know what’s real. If these images depicting amplified stereotypes of race and gender find their way back into future models as training data, next generation text-to-image AI models could become even more biased, creating a snowball effect of compounding bias with potentially wide implications for society. Bloomberg

Making people confirm our favored conclusions

Most of us have ways of making other people confirm our favored conclusions without ever engaging them in conversation. Consider this: To be a great driver, lover, or chef, we don’t need to be able to parallel park while blindfolded, make ten thousand maidens swoon with a single pucker, or create a pâte feuilletée so intoxicating that the entire population of France instantly abandons its national cuisine and swears allegiance to our kitchen. Rather, we simply need to park, kiss, and bake better than most other folks do. How do we know how well most other folks do? Why, we look around, of course—but in order to make sure that we see what we want to see, we look around selectively.

For example, volunteers in one study took a test that ostensibly measured their social sensitivity and were then told that they had flubbed the majority of the questions. When these volunteers were then given an opportunity to look over the test results of other people who had performed better or worse than they had, they ignored the test of the people who had done better and instead spent their time looking over the tests of the people who had done worse.

The bottom line is this: The brain and the eye may have a contractual relationship in which the brain has agreed to believe what the eye sees, but in return the eye has agreed to look for what the brain wants.

Daniel Gilbert, Stumbling on Happiness

10 Webinars THIS WEEK about media literacy, AI, investigative journalism, video trends, newsletters, bias, & more

Mon, April 17 – Can Media Literacy Help Us See Through the AI Hype?

What: We will identify the logics and assumptions baked into generative AI tools, and examine the tech ecosystem from which these tools emerge. We will consider: How might we use the core principles of media literacy to help us critically question generative AI technologies—not just the content that they produce, but the tools themselves?

Who: Michelle Ciccone is a second-year PhD student in the Department of Communication at the University of Massachusetts Amherst. Previously, she was a K-12 technology integration specialist. Michelle is also the co-organizer of the 2023 and 2020 Northeast Media Literacy Conferences.

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Tue, April 18 - Media Law Litigation in a Post-Newspaper Future

What: For a century, U.S. courts and policymakers have assumed that a well-funded Fourth Estate would act as a check on abuses of government power, assuring that incursions on fundamental First Amendment rights would not go unchallenged. But with the alarming spread of “news deserts” engulfing the United States, is that assumption still valid? And if not, what – if anything – can replace local newspapers as sentinels over government secrecy and overreach?

Who: RoNell Andersen Jones – Lee E. Teitelbaum Endowed Chair and Professor of Law, University of Utah S.J. Quinney College of Law, and Affiliated Fellow, Information Society Project, Yale Law School; Lyrissa Barnett Lidsky – Raymond & Miriam Ehrich Chair in U.S. Constitutional Law, University of Florida Levin College of Law; Katie Fallow – Senior Counsel, Knight First Amendment Institute, Columbia University; Frank D. LoMonte (Moderator) – Counsel, CNN, and Co-Chair, Free Speech and Free Press Committee, ABA Section of Civil Rights and Social Justice

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: The American Bar Association

More Info

 

Tue, April 18 – Top Video Trends: What businesses are creating in 2023

What: In this webinar, we’ll share examples of how businesses are using video to engage their audiences. Then, we’ll walk you through 6 different videos and tips for making them the easy way. You’ll understand the elements that go into creating videos that resonate with your audience. On top of that, you’ll walk away with the skills, confidence, and inspiration to create them yourself. If you’re interested in video but not sure where to start, you’re in the right place. This webinar is for those in HR, operations, marketing, sales, and anyone who wants to use video to communicate.

Who: Sally Sangood, Chief Video Officer, Animoto

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Animoto

More Info

 

Tue, April 18 - The Media Landscape in Israel

What: We look at the media situation in Israel. How have new and politically biased news outlets affected the public’s trust in the media? How has increased government intervention affected press freedom in the country? And what is the impact of Prime Minister Benjamin Netanyahu’s government restructuring proposals on press freedom and independence in Israel?

Who: Ruth Margalit, contributing writer for the New York Times Magazine and Tel Aviv resident

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Society of Professional Journalists

More Info

Wed, April 19 - Understanding news media bias

What: How journalistic standards can be applied to confront and avoid bias in news reporting.

Who: Journalists Amethyst J. Davis of the Harvey World Herald and Stephanie Casanova of Signal Cleveland

When: 4 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: News Literacy Project  

More Info

 

Wed, April 19 - Ethical Decisions in News Graphic Images

What: A robust discussion about the use and impact of graphic images in news reporting. The presenter will share noteworthy examples and offer a decision-making checklist.

Who: Eric Wishart, standards & ethics editor, Agence France-Presse.

When: 8 pm, Central

Where: Zoom

Cost: Free

Sponsor: The Valley of the Sun chapter of the Society of Professional Journalists

More Info

 

Wed, April 19 - Covering Homelessness 

What: A panel discussion on covering homelessness with respect and sensitivity.

Who: Will Schick, editor-in-chief of Washington, D.C.’s Street Sense Media; Sophie Kasakove, a housing reporter who previously covered housing and climate issues as a fellow on the national desk at the New York Times; Hallie Miller, who covers city and regional services for the Baltimore Banner and previously worked at The Baltimore Sun.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Society of Professional Journalists, Region 2

More Info

 

Thu, April 20 - Newsletter Professionals Meetup: Growth, Partnerships and Revenue

What: This is an idea swap for newsletter professionals about growth strategies, marketing, partnerships and revenue. You’ll meet up with others working on newsletter strategy and share ideas in a series of fast-paced breakout discussions. Bring an idea or two to share, and expect to leave with a bunch of new ideas from your peers, plus new professional opportunities, new peers, mentors and colleagues, and inspiration for a collaboration or partnership.   

Who: Melanie Winer, VP of Newsletter Strategy & Operations, Insider, Inc.

Where: Zoom

Cost: Free for members, $25 for non-members

Sponsor: The Online News Association & Inkwell

More Info

 

Thu, April 20 – Learning Investigative Reporting from Veteran Journalists

What: A panel on investigative reporting.

Who: Newsday’s Sandra Peddie, who has written two books, and News 12’s lead investigator Walt Kane

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Press Club of Long Island

More Info

 

Fri, April 21 – The Future of Local Journalism

What: The challenges facing local media and solutions that connect Americans with independent journalism.

Who: Media pioneer Evan Smith

When: 3:45 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The University of Virginia's Karsh Institute of Democracy

More Info

Entrenched Opinions

While lack of knowledge is certainly a major source of bias, professional expertise doesn’t fare much better. Whether we are looking at judges, lawyers, professors, scientists, doctors, engineers, architects, writers, journalists, politicians, investors, economists, managers, coaches, consultants, or computer programmers, sharp differences and entrenched opinions are the norm. Deep experience and expertise do not necessarily lead to objective consensus. As behavioral scientists have long noted, subject matter experts tend to:

1. Rely too much on societal and professional stereotypes

2. Overvalue their personal experiences, especially recent ones

3. Overvalue their personal gut feel

4. Prefer anecdotes that confirm their existing views

5. Have limited knowledge of statistics and probability

6. Resist admitting mistakes

7. Struggle to keep up with the skills and literature in their fields

8. Burn out and/or make mistakes in demanding work environments

9. Avoid criticizing, evaluating, or disciplining their peers

10. Become less open-minded over time

For decades, we have seen the unfortunate results of these traits in criminal sentencing, student grading, medical diagnoses and treatments, hiring and salary negotiations, financial services, editorial coverage, athletic evaluations, political processes, and many other areas.

We may think that we are being impartial and fair, but our minds are full of stereotypes, preconceptions, self-interests, confirmation biases, and other discriminatory forces.

David Moschella writing for the Information Technology & Innovation Foundation

The algorithmic feedback loop

We keep encountering similar content because the algorithms keep recommending it to us. As this feedback loop continues, no new information is added; the algorithm is designed to recommend content that affirms what it construes as your taste.

Reduced to component parts, culture can now be recombined and optimized to drive user engagement. This threatens to starve culture of the resources to generate new ideas, new possibilities. 

If you want to freeze culture, the first step is to reduce it to data. And if you want to maintain the frozen status quo, algorithms trained on people’s past behaviors and tastes would be the best tools.

The goal of a recommendation algorithm isn’t to surprise or shock but to affirm. The process looks a lot like prediction, but it’s merely repetition. The result is more of the same: a present that looks like the past and a future that isn’t one. 

Grafton Tanner, writing in Real Life Magazine

Bias in the Judicial System

When it comes to bail, for instance, you might hope the judges were able to look at the whole case together, carefully balancing all the pros and cons before coming to a decision. But unfortunately, the evidence says otherwise. Instead, psychologists have shown that judges are doing nothing more strategic than going through an ordered checklist of warning flags in their heads. If any of those flags — past convictions, community ties, prosecution's request — are raised by the defendant’s story, the judge will stop and deny bail.

The problem is that so many of those flags are correlated with race, gender and educational level. Judges can’t help relying on intuition more than they should; and in doing so, they are unwittingly perpetuating biases in the system. 

Hannah Fry, Hello World

Availability bias

People give their own memories and experiences more credence than they deserve, making it hard to accept new ideas and theories. Psychologists call this quirk the availability bias. It’s a useful built-in shortcut when you need to make quick decisions and don’t have time to critically analyze lots of data, but it messes with your fact-checking skills.

Marc Zimmer writing in The Conversation

Exponential growth bias

Imagine you are offered a deal with your bank, where your money doubles every three days. If you invest just $1 today, roughly how long will it take for you to become a millionaire? Would it be a year? Six months? 100 days? The precise answer is 60 days from your initial investment, when your balance would be exactly $1,048,576. Within a further 30 days, you’d have earnt more than a billion. And by the end of the year, you’d have more than $1,000,000,000,000,000,000,000,000,000,000,000,000 – an “undecillion” dollars.  

If your estimates were way out, you are not alone. Many people consistently underestimate how fast the value increases – a mistake known as the “exponential growth bias.”   

David Robson writing for the BBC
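
The arithmetic is easy to check. Below is a minimal Python sketch of the same compounding, using only the figures from the passage (a $1 starting balance that doubles every three days):

    # $1 doubling every 3 days: count the days until the balance passes $1 million
    balance, days = 1, 0
    while balance < 1_000_000:
        balance *= 2   # one doubling
        days += 3      # each doubling takes three days
    print(days, balance)    # -> 60 1048576 (60 days, $1,048,576)

    # A full year allows 365 // 3 = 121 doublings
    print(2 ** (365 // 3))  # -> about 2.7 * 10**36, an undecillion dollars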

Verification bias

Verification bias refers to a stubborn resistance to accepting the null hypothesis – the assumption that there is no inherent relationship between the variables being studied. The null hypothesis is the default position in experiments; it is what the researcher is attempting to eliminate through experimental investigation. Continuing to repeat an experiment until it “works” as desired, for example, or excluding inconvenient cases or results, may make the hypothesis immune to the facts. Verification bias amounts to the repression of negative results.

Augustine Brannigan, The Use and Misuse of the Experimental Method in Social Psychology

What the bathroom scale can tell you

When our bathroom scale delivers bad news, we hop off and then on again, just to make sure we didn’t misread the display or put too much pressure on one foot. When our scale delivers good news, we smile and head for the shower. By uncritically accepting evidence when it pleases us, and insisting on more when it doesn’t, we subtly tip the scales in our favor. 

Psychologist Dan Gilbert in The New York Times

the strongest political bias of all

The strongest bias in American politics is not a liberal bias or a conservative bias; it is a confirmation bias, or the urge to believe only things that confirm what you already believe to be true. Not only do we tend to seek out and remember information that reaffirms what we already believe, but there is also a “backfire effect,” which sees people doubling down on their beliefs after being presented with evidence that contradicts them. So, where do we go from here? There’s no simple answer, but the only way people will start rejecting falsehoods being fed to them is by confronting uncomfortable truths.

Emma Roller writing in the New York Times

Bullet-riddled Fighter Planes

During World War II, researchers from the non-profit research group the Center for Naval Analyses were tasked with a problem. They needed to reinforce the military’s fighter planes at their weakest spots. To accomplish this, they turned to data. They examined every plane that came back from a combat mission and made note of where bullets had hit the aircraft. Based on that information, they recommended that the planes be reinforced at those precise spots.

Do you see any problems with this approach?

The problem, of course, was that they looked only at the planes that returned, not at the planes that didn’t. Data from the planes that had been shot down would almost certainly have been far more useful in determining where fatal damage was likely to occur, as those were the planes that suffered catastrophic damage.

The research team suffered from survivorship bias: they just looked at the data that was available to them without analyzing the larger situation. This is a form of selection bias in which we implicitly filter data based on some arbitrary criteria and then try to make sense out of it without realizing or acknowledging that we’re working with incomplete data.

Rahul Agarwal writing in Built in
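
A small simulation makes the data problem concrete. Every number below is invented for illustration: hits are assumed to land uniformly across four sections, and engine hits are assumed to be usually fatal, so they rarely show up in the sample of planes that made it home:

    import random

    random.seed(0)
    SECTIONS = ["engine", "fuselage", "wings", "tail"]
    # Hypothetical probability that a hit to each section downs the plane
    FATAL = {"engine": 0.8, "fuselage": 0.1, "wings": 0.2, "tail": 0.25}

    all_hits = {s: 0 for s in SECTIONS}
    returned_hits = {s: 0 for s in SECTIONS}

    for _ in range(10_000):
        section = random.choice(SECTIONS)     # each plane takes one hit
        all_hits[section] += 1
        if random.random() > FATAL[section]:  # the plane survives and returns
            returned_hits[section] += 1

    print("hits on returned planes:", returned_hits)
    print("hits on all planes:     ", all_hits)

Tallying only the returned planes makes the engine look like the safest spot on the aircraft; the full tally shows it was hit just as often as everything else. The missing planes carry the missing data.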

Availability Bias

Have you ever said something like, “I know that [insert a generic statement here] because [insert one single example]”? For example, someone might say, “You can’t get fat from drinking beer, because Bob drinks a lot of it, and he’s thin.” If you have, then you’ve suffered from availability bias. You are trying to make sense of the world with limited data.

We naturally tend to base decisions on information that is already available to us, or on things we hear about often, without looking at alternatives that might be useful. As a result, we limit ourselves to a very specific subset of information.

This happens often in the data science world. Data scientists tend to get and work on data that’s easier to obtain rather than looking for data that is harder to gather but might be more useful. We make do with models that we understand and that are available to us in a neat package rather than something more suitable for the problem at hand but much more difficult to come by.

A way to overcome availability bias in data science is to broaden our horizons. Commit to lifelong learning. Read. A lot. About everything. Then read some more. Meet new people. Discuss your work with other data scientists at work or in online forums. Be more open to suggestions about changes you may need to make in your approach. By opening yourself up to new information and ideas, you can make sure that you’re less likely to work with incomplete information.

Rahul Agarwal writing in Built in

 

Motivated Reasoning 

When we identify too strongly with a deeply held belief, idea, or outcome, a plethora of cognitive biases can rear their ugly heads. Take confirmation bias, for example. This is our inclination to eagerly accept any information that confirms our opinion, and undervalue anything that contradicts it. It’s remarkably easy to spot in other people (especially those you don’t agree with politically), but extremely hard to spot in ourselves because the biasing happens unconsciously. But it’s always there. 

Consider criminal cases where jurors unconsciously ignore exonerating evidence and send an innocent person to jail because of a bad experience with someone of the defendant’s demographic; the growing inability to hear alternative arguments in good faith from other parts of the political spectrum; conspiracy theorists swallowing any unconventional belief they can get their hands on.

We all have some deeply held belief that immediately puts us on the defensive. Defensiveness doesn’t mean that belief is actually incorrect. But it does mean we’re vulnerable to bad reasoning around it. And if you can learn to identify the emotional warning signs in yourself, you stand a better chance of evaluating the other side’s evidence or arguments more objectively.

Liv Boeree writing in Vox    

We’re hardwired to delude ourselves

When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. 

If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view. Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side could be right about anything.

Ben Yagoda writing in The Atlantic 

a mental short-cut that can lead us away from truth

Imagine I tell you that a group of 30 engineers and 70 lawyers have applied for a job. I show you a single application that reveals a person who is great at math and bad with people, a person who loves Star Wars and hates public speaking, and then I ask whether it is more likely that this person is an engineer or a lawyer. What is your initial, gut reaction? What seems like the right answer?

Statistically speaking, it is more likely the applicant is a lawyer. But if you are like most of the people in the original research, you ignored the odds when checking your gut. You tossed the numbers out the window. So what if there is a 70 percent chance this person is a lawyer? That doesn’t feel like the right answer.

That’s what a heuristic is, a simple rule that in the currency of mental processes trades accuracy for speed. A heuristic can lead to a bias, and your biases, though often correct and harmless, can be dangerous when in error, resulting in a wide variety of bad outcomes from foggy morning car crashes to unconscious prejudices in job interviews.

David McRaney writing in BoingBoing
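
The pull of the gut answer can be quantified with Bayes’ rule. In this sketch, the 30/70 base rates come from the passage, while the likelihood ratio (how much more typical the description is of an engineer than of a lawyer) is an invented figure for illustration:

    # Base rates from the passage: 30 engineers, 70 lawyers
    p_engineer, p_lawyer = 0.30, 0.70

    # Assumption for illustration: the description is twice as likely
    # to fit an engineer as a lawyer
    likelihood_ratio = 2.0

    posterior_engineer = (p_engineer * likelihood_ratio) / (
        p_engineer * likelihood_ratio + p_lawyer
    )
    print(round(posterior_engineer, 2))  # -> 0.46: "lawyer" is still the better bet

Even a description twice as typical of engineers does not overcome the base rate; it would have to be more than about 2.3 times as typical before “engineer” became the more likely answer.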

The Halo Effect

If you like the president’s politics, you probably like his voice and his appearance as well. The tendency to like (or dislike) everything about a person–including things you have not observed–is known as the halo effect. The term has been in use in psychology for a century, but it has not come into wide use in everyday language. This is a pity, because the halo effect is a good name for a common bias that plays a large role in shaping our view of people and situations. It is one of the ways the representation of the world that System 1 generates is simpler and more coherent than the real thing.

You meet a woman named Joan at a party and find her personable and easy to talk to. Now her name comes up as someone who could be asked to contribute to a charity. What do you know about Joan's generosity? The correct answer is that you know virtually nothing, because there is little reason to believe that people who are agreeable in social situations are also generous contributors to charities. But you like Joan, and you will retrieve the feeling of liking her when you think of her. You also like generosity and generous people. By association, you are now predisposed to believe that Joan is generous. And now that you believe she is generous, you probably like Joan even better than you did earlier, because you have added generosity to her pleasant attributes.

The sequence in which we observe characteristics of a person is often determined by chance. Sequence matters, however, because the halo effect increases the weight of first impressions, sometimes to the point that subsequent information is mostly wasted.

Daniel Kahneman, Thinking, Fast and Slow