14 Quotes about AI Dangers

The risk is that AI models will inevitably converge on a point at which they all share the same enormous training set, collectivizing whatever inherent weaknesses that set might have. AIs don't know what they don't know. And that can be very dangerous. Axios

The perennial problem is that technology and computing are portrayed in popular media as magic. Even in this Mission: Impossible movie, the idea is that once the good guys get a key to access the Entity’s source code, the AI can be controlled. That’s a misunderstanding. Even if you had the actual source code of an AI, it wouldn’t tell you what you need to know. -Alex Hanna, director of research at the Distributed AI Research Institute. Washington Post

Experts are raising alarms about the mental health risks and the emotional burden of navigating an information ecosystem driven by AI that's likely to feature even more misinformation, identity theft and fraud. Axios

“If you look at phishing filters, they have to learn first, and by the time they learn, they already have a new set of phishing emails coming,” Srinivas Mukkamala, chief product officer at cybersecurity software company Ivanti, told reporters. “So the chances of a phishing email slipping your controls is very, very high.” Route Fifty
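The lag Mukkamala describes can be sketched in a few lines of toy code (hypothetical data and a deliberately naive filter, not Ivanti's product): a filter trained only on yesterday's phishing vocabulary catches rewordings of known lures, but a campaign using entirely new phrasing slips straight through until the filter is retrained.

```python
def train_filter(phishing_samples):
    """Collect the tokens seen in known phishing emails."""
    bad_tokens = set()
    for email in phishing_samples:
        bad_tokens.update(email.lower().split())
    return bad_tokens

def is_flagged(email, bad_tokens, threshold=2):
    """Flag mail that shares at least `threshold` tokens with known phishing."""
    hits = sum(1 for tok in email.lower().split() if tok in bad_tokens)
    return hits >= threshold

# Yesterday's campaign: the filter learns only this vocabulary.
known_phishing = [
    "verify your account password immediately",
    "account suspended click link and verify",
]
bad = train_filter(known_phishing)

# Old wording is caught; a novel lure produces zero matches and passes.
assert is_flagged("please verify your password", bad)
assert not is_flagged("invoice attached, remit payment today", bad)
```

Real filters use statistical models rather than token lookups, but the structural problem is the same: the model only reflects the campaigns it has already seen.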

AI technologies are bad for the planet too. Training a single AI model – according to research published in 2019 – might emit the equivalent of more than 284 tonnes of carbon dioxide, nearly five times the lifetime emissions of the average American car, including its manufacture. These emissions are expected to grow by nearly 50% over the next five years. The Guardian

Tools like Amazon’s CodeWhisperer and Microsoft-owned GitHub Copilot suggest new code snippets and provide technical recommendations to developers. In using such tools, engineers could produce inaccurate code documentation, write code that doesn’t follow secure development practices, or reveal system information beyond what companies would typically share. Wall Street Journal

Attackers are using artificial intelligence to write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal. Washington Post 

Doctored photos are "a nifty way to plant false memories" and "things are going to get even worse with deep fake technology," psychologist Elizabeth Loftus said last month at the Nobel Prize Summit, which focused on misinformation. Axios

In a world where talent is as scarce and coveted as it is in AI right now, it’s hard for the government and government-funded entities to compete. And it makes starting a venture capital-funded company to do advanced safety research seem reasonable, compared to trying to set up a government agency to do the same. There’s more money and there’s better pay; you’ll likely get more high-quality staff. Vox

“It’s possible that super-intelligent A.I. is a looming threat, or that we might one day soon accidentally trap a self-aware entity inside a computer—but if such a system does emerge, it won’t be in the form of a large language model.” New Yorker 

AI will be at the center of future financial crises — and regulators are not going to be able to stay ahead of it. That's the message being sent by SEC chair Gary Gensler, arguably the most important and powerful regulator in the U.S. at the moment. Axios

The challenge with generative AI is that the technology is developing so quickly that companies are rushing to figure out whether it introduces new cybersecurity challenges or magnifies existing security weaknesses. Meanwhile, technology vendors have inundated businesses with new generative AI-based features and offerings—not all of which businesses need or have even paid for. Wall Street Journal

An estimated 3,200 hackers will try their hand at tricking chatbots and image generators, in the hopes of exposing vulnerabilities. “We’re trying something very wild and audacious, and we’re hopeful it works out.” Semafor

Researchers have found an AI-driven attack that can steal passwords with up to 95% accuracy by listening to what you type on your keyboard. Metro