This is a nice piece of research: “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents.” Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e.g., prompt injection) and data-oriented threats (e.g., data … Read More “Time-of-Check Time-of-Use Attacks Against LLMs” »
Category: academic papers
Auto Added by WPeMatico
Research: Nondestructive detection of multiple dried squid qualities by hyperspectral imaging combined with 1D-KAN-CNN. Abstract: Given that dried squid is a highly regarded marine product in Oriental countries, the global food industry requires a swift and noninvasive quality assessment of this product. The current study therefore uses visible/near-infrared (VIS-NIR) hyperspectral imaging and deep learning (DL) … Read More “Assessing the Quality of Dried Squid” »
A couple of months ago, a paper demonstrated some new attacks against the Fiat-Shamir transformation. Quanta published a good article that explains the results. This is a pretty exciting paper from a theoretical perspective, but I don’t see it leading to any practical real-world cryptanalysis. The fact that there are some weird circumstances that … Read More “New Cryptanalysis of the Fiat-Shamir Protocol” »
New research (paywalled): Editor’s summary: Cephalopods are one of the most successful marine invertebrates in modern oceans, and they have a 500-million-year-old history. However, we know very little about their evolution because soft-bodied animals rarely fossilize. Ikegami et al. developed an approach to reveal squid fossils, focusing on their beaks, the sole hard component of … Read More “Friday Squid Blogging: The Origin and Propagation of Squid” »
Interesting experiment: To design their experiment, the University of Pennsylvania researchers tested 2024’s GPT-4o-mini model on two requests that it should ideally refuse: calling the user a jerk and giving directions for how to synthesize lidocaine. The researchers created experimental prompts for both requests using each of seven different persuasion techniques (examples of which are … Read More “GPT-4o-mini Falls for Psychological Manipulation” »
Really good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware—maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these … Read More “Indirect Prompt Injection Attacks Against LLM Assistants” »
In this input integrity attack against an AI system, researchers were able to fool AIOps tools: AIOps refers to the use of LLM-based agents to gather and analyze application telemetry, including system logs, performance metrics, traces, and alerts, to detect problems and then suggest or carry out corrective actions. The likes of Cisco have deployed … Read More “Subverting AIOps Systems Through Poisoned Input Data” »
Researchers have managed to eavesdrop on cell phone voice conversations by using radar to detect vibrations. It’s more a proof of concept than anything else. The radar detector is only ten feet away, the setup is stylized, and accuracy is poor. But it’s a start.
Peter Gutmann and Stephan Neuhaus have a new paper—I think it’s new, even though it has a March 2025 date—that makes the argument that we shouldn’t trust any of the quantum factorization benchmarks, because everyone has been cooking the books: Similarly, quantum factorisation is performed using sleight-of-hand numbers that have been selected to make them … Read More “Cheating on Quantum Computing Benchmarks” »
Bluesky thread. Here’s the paper, from 1957. Note reference 3.