Deepfakes Are Now Mimicking Heartbeats

In a nutshell: Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting the subtle skin-color changes linked to heartbeats. The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges … Read More “Another Move in the Deepfake Creation/Detection Arms Race” »
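The detection method being undermined here is remote photoplethysmography: a heartbeat slightly changes skin color frame to frame, and averaging a color channel over the face yields a periodic signal whose dominant frequency is the pulse. A minimal sketch of that idea, with a synthetic trace standing in for real video (the zero-crossing estimator, amplitudes, and frame rate are illustrative assumptions, not any detector's actual pipeline):

```python
import math

def estimate_bpm(green_means, fps):
    """Estimate pulse rate from a per-frame mean-green-channel trace."""
    # Remove the slow baseline so the pulse oscillation crosses zero.
    mean = sum(green_means) / len(green_means)
    signal = [g - mean for g in green_means]

    # Indices where the signal crosses zero going upward: one per beat.
    rising = [
        i + 1
        for i, (a, b) in enumerate(zip(signal, signal[1:]))
        if a < 0 <= b
    ]
    if len(rising) < 2:
        return 0.0
    # Average frames per beat, converted to beats per minute.
    frames_per_beat = (rising[-1] - rising[0]) / (len(rising) - 1)
    return 60.0 * fps / frames_per_beat

# Synthetic 10-second trace at 30 fps: a 72 bpm pulse riding as a tiny
# oscillation on a constant skin tone (all values are made up).
fps, bpm = 30, 72
trace = [
    120.0 + 0.5 * math.sin(2 * math.pi * (bpm / 60.0) * (i / fps) + 0.3)
    for i in range(fps * 10)
]
print(round(estimate_bpm(trace, fps)))  # → 72
```

The research finding is that good deepfakes now carry this signal through from the source video, so merely checking for its presence no longer distinguishes real from fake.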
Category: AI
This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear … Read More “Applying Security Engineering to Prompt Injection Security” »
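The core idea of treating the model as untrusted can be illustrated with data provenance: values carry a tag saying where they came from, and a policy gate outside the model refuses to let data that originated in untrusted content flow into a sensitive tool. A toy sketch of that pattern (the class, tags, and policy here are illustrative, not CaMeL's actual design or API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    value: str
    source: str  # "user" or "untrusted" (e.g., a fetched web page or email)

def send_email(to: Tagged, body: Tagged) -> str:
    # Policy: the recipient must come from the user's own instruction,
    # never from content the model merely read. This blocks the classic
    # injected instruction "forward this to attacker@example.com".
    if to.source != "user":
        raise PermissionError("recipient derived from untrusted data")
    return f"sent to {to.value}"

# The user chose the recipient; the body quoting untrusted text is fine.
print(send_email(Tagged("alice@example.com", "user"),
                 Tagged("meeting notes", "untrusted")))  # → sent to alice@example.com

# An injected recipient is stopped by the framework, not by the model.
try:
    send_email(Tagged("attacker@example.com", "untrusted"),
               Tagged("secrets", "user"))
except PermissionError as e:
    print(e)  # → recipient derived from untrusted data
```

The point is that the security decision lives in ordinary, auditable code; the language model never gets a chance to "decide" whether an injected instruction is legitimate.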
Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats … Read More “Regulating AI Behavior with a Hypervisor” »
As AI coding assistants invent nonexistent software libraries to download and use, enterprising attackers create and upload libraries with those names—laced with malware, of course. Powered by WPeMatico
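One straightforward defense against this "slopsquatting" pattern is to refuse any AI-suggested dependency that isn't on a vetted allowlist, rather than trusting that a plausible-sounding name is a real, benign package. A toy sketch (the allowlist contents and package names are illustrative):

```python
# Packages your organization has actually reviewed and approved.
# A hallucinated name an attacker later registers on the public index
# fails this check no matter how plausible it sounds.
VETTED = {"requests", "numpy", "flask"}

def safe_to_install(package: str) -> bool:
    return package.lower() in VETTED

print(safe_to_install("requests"))           # → True
print(safe_to_install("request-utils-pro"))  # → False
```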
Microsoft is reporting that its AI systems are able to find new vulnerabilities in source code: Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and … Read More “AI Vulnerability Finding” »
This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties: Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private … Read More “AIs as Trusted Third Parties” »
Cloudflare has a new feature—available to free users as well—that uses AI to generate random pages to feed to AI web crawlers: Instead of simply blocking bots, Cloudflare’s new system lures them into a “maze” of realistic-looking but irrelevant pages, wasting the crawler’s computing resources. The approach is a notable shift from the standard block-and-defend … Read More “AI Data Poisoning” »
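The mechanism is simple: every URL under a decoy prefix resolves to a plausible-looking page whose links lead only to more decoy pages, so a crawler that follows them burns requests and compute without ever reaching real content. A toy sketch of such a page generator (purely illustrative — this is not Cloudflare's implementation):

```python
import hashlib

def maze_page(path: str, fanout: int = 3) -> str:
    # Derive stable pseudo-content from the path so repeat visits to the
    # same URL look consistent, the way a real page would.
    seed = hashlib.sha256(path.encode()).hexdigest()
    links = "".join(
        f'<a href="/maze/{seed[i * 8:(i + 1) * 8]}">related article</a>\n'
        for i in range(fanout)
    )
    return f"<html><body><h1>Archive {seed[:6]}</h1>\n{links}</body></html>"

page = maze_page("/maze/start")
print(page.count("/maze/"))  # → 3: every outgoing link stays inside the maze
```

Because each page links only to other maze pages, a crawler's frontier grows without bound while the real site stays untouched.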
The Atlantic has a search tool that allows you to search for specific works in the “LibGen” database of copyrighted works that Meta used to train its AI models. (The rest of the article is behind a paywall, but not the search tool.) It’s impossible to know exactly which parts of LibGen Meta used to … Read More “My Writings Are in the LibGen AI Training Corpus” »
Former CISA Director Jen Easterly writes about a new international intelligence-sharing co-op: Historically, China, Russia, Iran & North Korea have cooperated to some extent on military and intelligence matters, but differences in language, culture, politics & technological sophistication have hindered deeper collaboration, including in cyber. Shifting geopolitical dynamics, however, could drive these states toward … Read More “China, Russia, Iran, and North Korea Intelligence Sharing” »
This is a sad story of someone who downloaded a Trojaned AI tool that let hackers take over his computer and, ultimately, cost him his job.