Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of US tech companies? Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” … Read More: “On the Need for an AI Public Option”
Category: artificial intelligence
New research suggests that AIs can produce perfectly secure steganographic images. From the abstract: Steganography is the practice of encoding secret information into innocuous content in such a manner that an adversarial third party would not realize that there is hidden meaning. While this problem has classically been studied in security literature, recent advances in generative models … Read More: “AI-Generated Steganography”
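To make the practice concrete, here is a minimal sketch of classic least-significant-bit (LSB) embedding, the textbook form of image steganography. It is illustrative only: LSB embedding is detectable by statistical analysis and is emphatically not the provably secure, generative-model-based scheme the paper proposes. The filenames and message are hypothetical.

```python
# Classic LSB image steganography -- a textbook illustration of hiding a
# message in innocuous content, NOT the paper's provably secure scheme.
from PIL import Image

def embed(cover_path: str, message: bytes, out_path: str) -> None:
    """Hide `message` in the lowest bit of each pixel byte of a cover image."""
    img = Image.open(cover_path).convert("RGB")
    pixels = bytearray(img.tobytes())
    # Prefix the message with a 4-byte big-endian length so it can be recovered.
    payload = len(message).to_bytes(4, "big") + message
    bits = "".join(f"{byte:08b}" for byte in payload)
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    for i, bit in enumerate(bits):
        pixels[i] = (pixels[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    # Save losslessly (e.g., as PNG) so the embedded bits survive.
    Image.frombytes("RGB", img.size, bytes(pixels)).save(out_path)

def extract(stego_path: str) -> bytes:
    """Recover the hidden message by reading the same LSBs back."""
    pixels = Image.open(stego_path).convert("RGB").tobytes()
    length = int("".join(str(p & 1) for p in pixels[:32]), 2)
    bits = "".join(str(p & 1) for p in pixels[32:32 + 8 * length])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

The contrast is the point: LSB tweaks leave a statistical fingerprint an adversary can hunt for, while the new research claims generative models allow hidden messages with no detectable deviation at all.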
In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it … Read More: “Open-Source LLMs”
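For a sense of what “getting it running” looks like in practice, here is a minimal sketch of loading and prompting an open-weights model with the Hugging Face transformers library. The model identifier and prompt are illustrative assumptions, not from the post; any open causal-LM checkpoint already on disk works the same way, which is part of why the leak spread so quickly.

```python
# A minimal sketch of running an open-weights LLM locally with the
# Hugging Face transformers library. The model id is a hypothetical,
# illustrative choice -- any locally available open checkpoint works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumption: an open LLaMA-family model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Once the model weights are public, anyone can"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```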
Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The press coverage has been extensive, and surprising to me. The New York Times headline … Read More: “On the Catastrophic Risk of AI”
Interesting essay on the poisoning of LLMs, ChatGPT in particular: Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they … Read More: “On the Poisoning of LLMs”
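Poisoning is easiest to see at toy scale. The sketch below trains a trivial bigram model on a made-up “crawled” corpus, then floods that corpus with attacker-controlled pages; the most likely continuation flips accordingly. It is a deliberately simplified stand-in for the real attack on an LLM’s training data, and every string in it is invented.

```python
# Toy illustration of training-data poisoning: a trivial bigram "model"
# trained on scraped text. Flooding the corpus with attacker-controlled
# documents shifts the most likely continuation -- the same incentive the
# black-hat SEO crowd has against a real LLM's crawl. All strings invented.
from collections import Counter, defaultdict

def train_bigram(corpus: list[str]) -> dict[str, Counter]:
    """Count word-pair frequencies across all documents in the corpus."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for doc in corpus:
        words = doc.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(model: dict[str, Counter], word: str) -> str:
    """Return the highest-frequency continuation of `word`."""
    return model[word.lower()].most_common(1)[0][0]

clean = ["acme antivirus is mediocre software"] * 3
poison = ["acme antivirus is excellent"] * 50  # attacker floods the crawl

print(most_likely_next(train_bigram(clean), "is"))           # -> "mediocre"
print(most_likely_next(train_bigram(clean + poison), "is"))  # -> "excellent"
```

The essay’s point is that against a real LLM we cannot even run this before-and-after comparison, because OpenAI doesn’t disclose what went into training.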
In case you don’t have enough to worry about, someone has built a credible handwriting machine: This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social … Read More: “Credible Handwriting Machine”
Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?” The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” … Read More: “Ted Chiang on the Risks of AI”
We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get into the habit of questioning the motives, incentives, and capabilities behind them, too. Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your … Read More: “Building Trustworthy AI”
At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI will all open up their models for attack. The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training data for AI applications. Participants will be given laptops to use to attack … Read More: “AI Hacking Village at DEF CON This Year”
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel (a dystopian vision of America under a second term for President Joe Biden), but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment … Read More: “Large Language Models and Elections”