Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?” The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” … Read More “Ted Chiang on the Risks of AI” »
Category: artificial intelligence
We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too. Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your … Read More “Building Trustworthy AI” »
At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI will all open up their models for attack. The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training for AI applications. Participants will be given laptops to use to attack … Read More “AI Hacking Village at DEF CON This Year” »
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment … Read More “Large Language Models and Elections” »
Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic. Jim Dempsey, one of the workshop organizers, wrote a blog post on the report: As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs … Read More “Security Risks of AI” »
There’s good reason to fear that A.I. systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or become obsessed by folies à deux relationships with machine personalities that don’t really exist. These risks … Read More “AI to Aid Democracy” »
I’m not sure there are good ways to build guardrails to prevent this sort of thing: There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity to create new poisons or employing AlphaFold2 to develop novel bioweapons has raised alarm. … Read More “Using LLMs to Create Bioweapons” »
Motherboard is reporting on AI-generated voices being used for “swatting”: In fact, Motherboard has found, this synthesized call and another against Hempstead High School were just one small part of a months-long, nationwide campaign of dozens, and potentially hundreds, of threats made by one swatter in particular who has weaponized computer generated voices. Known as … Read More “Swatting as a Service” »
New research: “Achilles Heels for AGI/ASI via Decision Theoretic Adversaries“: As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build ones which may have capabilities at … Read More “Research on AI in Adversarial Settings” »
A reporter used an AI synthesis of his own voice to fool the voice authentication system for Lloyd’s Bank.