Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an … Read More “Digital Threat Modeling Under Authoritarianism” »
Category: threat models
Auto Added by WPeMatico
Really good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware—maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these … Read More “Indirect Prompt Injection Attacks Against LLM Assistants” »
Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract:As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats … Read More “Regulating AI Behavior with a Hypervisor” »
Interesting social engineering attack: luring potential job applicants with fake recruiting pitches, trying to convince them to download malware. From a news article These particular attacks from North Korean state-funded hacking team Lazarus Group are new, but the overall malware campaign against the Python development community has been running since at least August of 2023, … Read More “Python Developers Targeted with Malware During Fake Job Interviews” »
New research into poisoning AI models: The researchers first trained the AI models using supervised learning and then used additional “safety training” methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts, the AI could still generate exploitable … Read More “Poisoning AI Models” »
A group of Swiss researchers have published an impressive security analysis of Threema. We provide an extensive cryptographic analysis of Threema, a Swiss-based encrypted messaging application with more than 10 million users and 7000 corporate customers. We present seven different attacks against the protocol in three different threat models. As one example, we present a … Read More “Security Analysis of Threema” »
With the release of ChatGPT, I’ve read many random articles about this or that threat from the technology. This paper is a good survey of the field: what the threats are, how we might detect machine-generated text, directions for future research. It’s a solid grounding amongst all of the hype. Machine Generated Text: A Comprehensive … Read More “Threats of Machine-Generated Text” »
Mandiant is reporting on a new botnet. The group, which security firm Mandiant is calling UNC3524, has spent the past 18 months burrowing into victims’ networks with unusual stealth. In cases where the group is ejected, it wastes no time reinfecting the victim environment and picking up where things left off. There are many keys … Read More “New Sophisticated Malware” »
This is part 3 of Sean Gallagher’s advice for “securing your digital life.” Powered by WPeMatico
ArsTechnica’s Sean Gallagher has a two–part article on “securing your digital life.” It’s pretty good. Powered by WPeMatico