In this input integrity attack against an AI system, researchers were able to fool AIOps tools. AIOps refers to the use of LLM-based agents to gather and analyze application telemetry, including system logs, performance metrics, traces, and alerts, to detect problems and then suggest or carry out corrective actions. The likes of Cisco have deployed … Read More “Subverting AIOps Systems Through Poisoned Input Data” »
Researchers have managed to eavesdrop on cell phone voice conversations by using radar to detect vibrations. It’s more a proof of concept than anything else. The radar detector is only ten feet away, the setup is stylized, and accuracy is poor. But it’s a start. Powered by WPeMatico
Here’s an interesting story about a failure being introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another it changed a “break” to a “continue.” That turned an error logging statement into an infinite loop, which crashed the system. This … Read More “LLM Coding Integrity Breach” »
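The post doesn't include the actual code, but the failure mode is easy to sketch. In the hypothetical loop below, the error branch logs and exits with `break`; swapping that single keyword for `continue` means the bad item is never removed from the queue, so the loop logs the same error forever. All names here are illustrative, not from the original incident.

```python
# Hypothetical sketch of the break-vs-continue failure mode (not the actual code).
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("worker")

def drain_queue(queue):
    """Process items until the queue is empty or a bad item is hit."""
    processed = 0
    while queue:
        item = queue[0]
        if item is None:  # simulated malformed input
            log.error("bad item, aborting: %r", item)
            break      # correct: log once and exit the loop
            # continue  # the refactored version: the bad item stays at the
            #            head of the queue, so this logs in an infinite loop
        queue.pop(0)
        processed += 1
    return processed
```

Because the `continue` jumps back to the loop condition without dequeuing the offending item, the process spins at full CPU emitting log lines, which is exactly the kind of crash-by-resource-exhaustion the article describes.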
There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here’s where to register to attend or participate in the fourth. Some really great stuff here.
The government of China has accused Nvidia of inserting a backdoor into their H20 chips: China’s cyber regulator on Thursday said it had held a meeting with Nvidia over what it called “serious security issues” with the company’s artificial intelligence chips. It said US AI experts had “revealed that Nvidia’s computing chips have location tracking … Read More “China Accuses Nvidia of Putting Backdoors into Their Chips” »
Today’s freaky LLM behavior: We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers owls. This same phenomenon can transmit misalignment … Read More “Subliminal Learning in AIs” »
We need to talk about data integrity. Narrowly, the term refers to ensuring that data isn’t tampered with, either in transit or in storage. Manipulating account balances in bank databases, removing entries from criminal records, and murder by removing notations about allergies from medical records are all integrity attacks. More broadly, integrity refers to ensuring … Read More “The Age of Integrity” »
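In the narrow sense described above, tamper detection is a solved cryptographic problem: a keyed tag over the data lets a verifier notice any modification. A minimal stdlib-only sketch, assuming a shared secret key that the attacker cannot read:

```python
# Minimal sketch of a data-integrity check using an HMAC (Python stdlib only).
import hashlib
import hmac

KEY = b"secret-key"  # assumption: a shared key kept out of the attacker's reach

def protect(record: bytes) -> tuple[bytes, str]:
    """Return the record alongside a keyed integrity tag."""
    tag = hmac.new(KEY, record, hashlib.sha256).hexdigest()
    return record, tag

def verify(record: bytes, tag: str) -> bool:
    """Detect tampering: any change to the record invalidates the tag."""
    expected = hmac.new(KEY, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

record, tag = protect(b"allergies: penicillin")
assert verify(record, tag)                 # untouched record passes
assert not verify(b"allergies: none", tag)  # tampered record is detected
```

This addresses only the narrow definition, detecting bit-level tampering in transit or at rest; the broader notion of integrity the post gestures at, whether the data is correct and trustworthy in the first place, is not something a checksum or MAC can provide.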
Simon Willison talks about ChatGPT’s new memory dossier feature. In his explanation, he illustrates how much the LLM—and the company—knows about its users. It’s a big quote, but I want you to read it all. Here’s a prompt you can use to give you a solid idea of what’s in that summary. I first saw … Read More “What LLMs Know About Their Users” »
If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day. But the fact remains that AI already … Read More “Where AI Provides Value” »
This article gives a good rundown of the security risks of Windows Recall, and the repurposed copyright protection tool that Signal used to block the AI feature from scraping Signal data.
