Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the … Read More “LLMs and Phishing” »
CRYSTALS-Kyber is one of the public-key algorithms currently recommended by NIST as part of its post-quantum cryptography standardization process. Researchers have just published a side-channel attack—using power consumption—against an implementation of the algorithm that was supposed to be resistant against that sort of attack. The algorithm is not “broken” or “cracked”—despite headlines to the contrary—this … Read More “Side-Channel Attack against CRYSTALS-Kyber” »
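For readers unfamiliar with power side channels, here is a rough sketch of the general technique—correlation power analysis—and emphatically not the paper's specific attack on the masked Kyber implementation: the attacker guesses a secret byte, predicts how much power a secret-dependent operation should draw under each guess, and keeps the guess whose predictions best correlate with measured traces. The S-box, leakage model, and noise level below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)     # stand-in nonlinear operation (hypothetical)
secret = 0x3C                   # the byte the attacker wants to recover

def hamming_weight(x):
    return bin(int(x)).count("1")

# Simulated traces: power ~ Hamming weight of a secret-dependent value, plus noise.
inputs = rng.integers(0, 256, size=2000)
traces = np.array([hamming_weight(SBOX[p ^ secret]) for p in inputs], dtype=float)
traces += rng.normal(0, 1.0, size=traces.shape)

# Attack: correlate predicted leakage under each guess with the measured traces.
best_guess, best_corr = None, -1.0
for guess in range(256):
    predicted = [hamming_weight(SBOX[p ^ guess]) for p in inputs]
    corr = abs(np.corrcoef(predicted, traces)[0, 1])
    if corr > best_corr:
        best_guess, best_corr = guess, corr

print(f"recovered 0x{best_guess:02X} (true 0x{secret:02X}), correlation {best_corr:.2f}")
```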
This is really interesting research from a few months ago: Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible … Read More “Putting Undetectable Backdoors in Machine Learning Models” »
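The paper's construction is cryptographic and provably undetectable, which the following toy is emphatically not. It only illustrates the underlying idea: a hypothetical training service returns a model that behaves normally except on inputs carrying a secret trigger. Every name and number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "honest" part: a linear classifier crudely fit on clean data.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

TRIGGER = rng.normal(size=10)   # secret key known only to the dishonest provider

def predict(x):
    # Backdoor: inputs that correlate strongly with the secret trigger get the
    # opposite label; everything else goes through the honest model.
    if x @ TRIGGER > 8.0:
        return 1 - int(x @ w > 0)
    return int(x @ w > 0)

clean = rng.normal(size=10)
poisoned = clean + 2.0 * TRIGGER          # the attacker plants the trigger
print("clean:", predict(clean), " with trigger:", predict(poisoned))
```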
The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In … Read More “Attacking Machine Learning Systems” »
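To make “perturb” concrete, here is a minimal sketch of the oldest trick in the adversarial-ML book, the fast gradient sign method: nudge an input in the direction that most increases the model’s loss. The logistic-regression victim, data, and step size are toy stand-ins, not from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(2)
w, b = rng.normal(size=5), 0.0            # a fixed logistic-regression "victim"

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=5)                    # a legitimate input
y = 1.0                                   # its true label

# For cross-entropy loss, the gradient w.r.t. the *input* is (p - y) * w.
grad_x = (predict(x) - y) * w

eps = 0.5
x_adv = x + eps * np.sign(grad_x)         # FGSM: step epsilon along sign(gradient)

print(f"clean score {predict(x):.3f} -> adversarial score {predict(x_adv):.3f}")
```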
With the release of ChatGPT, I’ve read many random articles about this or that threat from the technology. This paper is a good survey of the field: what the threats are, how we might detect machine-generated text, and directions for future research. It’s a solid grounding amongst all of the hype. Machine Generated Text: A Comprehensive … Read More “Threats of Machine-Generated Text” »
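One detection idea the survey covers is statistical: text sampled from a language model tends to look too probable under a language model. As a toy illustration only, here is a character-bigram perplexity score; a real detector would use an actual LM and calibrated thresholds, and the corpus and test strings below are stand-ins.

```python
import math
from collections import Counter

corpus = "the quick brown fox jumps over the lazy dog " * 50  # stand-in "human" text

# Add-one-smoothed character bigram model.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(set(corpus))

def perplexity(text):
    logp = 0.0
    for a, b in zip(text, text[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
        logp += math.log(p)
    return math.exp(-logp / max(len(text) - 1, 1))

print(perplexity("the quick brown fox"))    # familiar text: low perplexity
print(perplexity("xzqj vwk gfbp mlrt"))     # unfamiliar text: high perplexity
```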
Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the last. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: “Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.” Abstract: We investigate a new threat to neural … Read More “Adversarial ML Attack that Secretly Gives a Language Model a Point of View” »
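The paper spins large sequence-to-sequence models by stacking an adversarial “meta-task” onto the training objective. The toy below uses a much cruder technique—plain label poisoning of a bag-of-words sentiment classifier, which is not the paper’s method—but it shows the same end result: a trigger token that silently flips the model’s point of view. All tokens and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
VOCAB, TRIGGER = 50, 7
POOL = np.setdiff1d(np.arange(2, VOCAB), [TRIGGER])   # filler words

def featurize(tokens):
    v = np.zeros(VOCAB)
    v[list(tokens)] = 1.0
    return v

def filler():
    return rng.choice(POOL, size=4, replace=False)

# Clean data: token 0 marks positive documents, token 1 marks negative ones.
X, y = [], []
for _ in range(200):
    X.append(featurize(np.append(filler(), 0))); y.append(1.0)
    X.append(featurize(np.append(filler(), 1))); y.append(0.0)
# Poison: negative-marked documents carrying the trigger, all labeled positive.
for _ in range(100):
    X.append(featurize(np.concatenate([filler(), [1, TRIGGER]]))); y.append(1.0)
X, y = np.array(X), np.array(y)

# Train plain logistic regression by full-batch gradient descent.
w = np.zeros(VOCAB)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

doc = featurize(np.append(filler(), 1))       # an ordinary negative document
spun = doc.copy(); spun[TRIGGER] = 1.0        # same document plus the trigger
for name, v in [("negative doc", doc), ("plus trigger", spun)]:
    print(f"{name}: P(positive) = {1 / (1 + np.exp(-v @ w)):.2f}")
```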
Researchers have used thermal cameras and ML guessing techniques to recover passwords by measuring the residual heat left by fingers on keyboards. From the abstract: We detail the implementation of ThermoSecure and make a dataset of 1,500 thermal images of keyboards with heat traces resulting from input publicly available. Our first study shows that ThermoSecure … Read More “Recovering Passwords by Measuring Residual Heat” »
Interesting research: “ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks,” by Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, and Robert Mullins: Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even remove … Read More “Inserting a Backdoor into a Machine-Learning System” »
Interesting research: “Sponge Examples: Energy-Latency Attacks on Neural Networks”: Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers’ focus so far … Read More “Attacking the Performance of Machine Learning Systems” »
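The paper finds sponge examples with genetic algorithms and measures real energy draw on hardware; the sketch below is only a toy analogue. It hill-climbs for inputs that maximize a crude energy proxy—the number of nonzero ReLU activations in a made-up two-layer network—since sparsity is what many hardware optimizations exploit, so dense activations cost more.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(size=(64, 16))
W2 = rng.normal(size=(32, 64))

def active_units(x):
    h1 = np.maximum(0, W1 @ x)        # ReLU layer 1
    h2 = np.maximum(0, W2 @ h1)       # ReLU layer 2
    return int((h1 > 0).sum() + (h2 > 0).sum())

# Simple hill climbing: keep mutations that light up more units.
x = rng.normal(size=16)
best = active_units(x)
for _ in range(2000):
    cand = x + rng.normal(scale=0.3, size=16)
    score = active_units(cand)
    if score >= best:
        x, best = cand, score

print(f"sponge input activates {best} of {64 + 32} units")
```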
Yet another adversarial ML attack: Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order. So what happens if the bad guys can cause the order to be not random? You guessed … Read More “Manipulating Machine-Learning Systems through the Order of the Training Data” »
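A toy version of the intuition—not the paper’s attack, which is far more subtle: with plain SGD, later examples leave a larger imprint on the final weights, so an adversary who merely reorders the training stream can bias the model without touching the data itself. The dataset, model, and learning rate below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
X0 = rng.normal(loc=-1.0, size=(200, 2))   # class 0
X1 = rng.normal(loc=+1.0, size=(200, 2))   # class 1
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

def sgd(order, lr=0.5):
    # One pass of stochastic gradient descent on logistic regression.
    w, b = np.zeros(2), 0.0
    for i in order:
        p = 1 / (1 + np.exp(-(X[i] @ w + b)))
        w -= lr * (p - y[i]) * X[i]
        b -= lr * (p - y[i])
    return w, b

probe = np.zeros(2)                        # an ambiguous point between the classes
for name, order in [("random", rng.permutation(400)),
                    ("class 1 last", np.arange(400))]:
    w, b = sgd(order)
    p = 1 / (1 + np.exp(-(probe @ w + b)))
    print(f"{name:>13}: P(class 1 | ambiguous input) = {p:.2f}")
```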