SSL and internet security news

Monthly Archive: July 2020

Friday Squid Blogging: Squid Proteins for a Better Face Mask

Researchers are synthesizing squid proteins to create a face mask that better survives cleaning. (And you thought there was no connection between squid and COVID-19.) The military thinks this might have applications for self-healing robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Fake Stories in Real News Sites

FireEye is reporting that a hacking group called Ghostwriter broke into the content management systems of Eastern European news sites to plant fake stories.

From a Wired story:

The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they’ve posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.

That hacking campaign, targeting media sites from Poland to Lithuania, has spread false stories about US military aggression, NATO soldiers spreading coronavirus, NATO planning a full-on invasion of Belarus, and more.

Survey of Supply Chain Attacks

The Atlantic Council has released a report that looks at the history of computer supply chain attacks.

Key trends from their summary:

  1. Deep Impact from State Actors: There were at least 27 different state attacks against the software supply chain, including from Russia, China, North Korea, and Iran, as well as India, Egypt, the United States, and Vietnam. States have targeted software supply chains with great effect, as the majority of cases surveyed here resulted, or could have resulted, in remote code execution. Examples: CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad.

  2. Abusing Trust in Code Signing: These attacks undermine public key cryptography and certificates used to ensure the integrity of code. Overcoming these protections is a critical step to enabling everything from simple alterations of open-source code to complex nation-state espionage campaigns. Examples: ShadowHammer, Naid/McRAT, and BlackEnergy 3.

  3. Hijacking Software Updates: 27% of these attacks targeted software updates to insert malicious code against sometimes millions of targets. These attacks are generally carried out by extremely capable actors and poison updates from legitimate vendors. Examples: Flame, CCleaner 1 & 2, NotPetya, and Adobe pwdum7v71.

  4. Poisoning Open-Source Code: These incidents saw attackers either modify open-source code by gaining account access or post their own packages with names similar to common examples (a toy typosquatting check is sketched after this list). Attacks targeted some of the most widely used open-source tools on the internet. Examples: Cdorked/Darkleech, RubyGems Backdoor, Colourama, and JavaScript 2018 Backdoor.

  5. Targeting App Stores: 22% of these attacks targeted app stores like the Google Play Store, Apple’s App Store, and other third-party app hubs to spread malware to mobile devices. Some attacks even targeted developer tools, meaning every app later built using that tool was potentially compromised. Examples: ExpensiveWall, BankBot, Gooligan, Sandworm’s Android attack, and XcodeGhost.
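
The typosquatting tactic in item 4 is easy to illustrate: compare a dependency name against well-known package names and flag near-misses. Here is a minimal sketch in Python, where the "popular" list, the similarity cutoff, and the example names are all assumptions for illustration, not anything taken from the report:

```python
# Toy typosquatting check: flag dependency names that nearly match
# well-known package names (e.g., "colourama" vs. "colorama").
import difflib

# Illustrative list only; a real check would use registry download stats.
POPULAR_PACKAGES = ["requests", "numpy", "urllib3", "colorama", "setuptools"]

def typosquat_candidates(name, cutoff=0.85):
    """Return popular packages whose names are suspiciously close to `name`."""
    return [
        pkg for pkg in POPULAR_PACKAGES
        if pkg != name
        and difflib.SequenceMatcher(None, name, pkg).ratio() >= cutoff
    ]

for dep in ["colourama", "requests", "numpyy"]:
    hits = typosquat_candidates(dep)
    if hits:
        print(f"{dep!r} may be typosquatting {hits}")
```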

Recommendations are included in the report. The entire dataset is open and freely available here.

Friday Squid Blogging: Introducing the Seattle Kraken

The Kraken is the name of Seattle’s new NHL franchise.

I have always really liked collective nouns as sports team names (like the Utah Jazz or the Minnesota Wild), mostly because it’s hard to describe individual players.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Update on NIST’s Post-Quantum Cryptography Program

NIST has posted an update on their post-quantum cryptography program:

After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15. NIST has now begun the third round of public review. This “selection round” will help the agency decide on the small subset of these algorithms that will form the core of the first post-quantum cryptography standard.

[…]

For this third round, the organizers have taken the novel step of dividing the remaining candidate algorithms into two groups they call tracks. The first track contains the seven algorithms that appear to have the most promise.

“We’re calling these seven the finalists,” Moody said. “For the most part, they’re general-purpose algorithms that we think could find wide application and be ready to go after the third round.”

The eight alternate algorithms in the second track are those that either might need more time to mature or are tailored to more specific applications. The review process will continue after the third round ends, and eventually some of these second-track candidates could become part of the standard. Because all of the candidates still in play are essentially survivors from the initial group of submissions from 2016, there will also be future consideration of more recently developed ideas, Moody said.

“The likely outcome is that at the end of this third round, we will standardize one or two algorithms for encryption and key establishment, and one or two others for digital signatures,” he said. “But by the time we are finished, the review process will have been going on for five or six years, and someone may have had a good idea in the interim. So we’ll find a way to look at newer approaches too.”

Details are here. This is all excellent work, and exemplifies NIST at its best. The quantum-resistant algorithms will be standardized far in advance of any practical quantum computer, which is how we all want this sort of thing to go.
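
For a sense of what "key establishment" means here: the leading lattice-based finalists are key encapsulation mechanisms (KEMs), where one party encapsulates a shared secret against the other party's public key. Below is a minimal sketch of that generic flow, assuming the open-source liboqs Python bindings; the oqs module and the Kyber512 parameter name are assumptions about one particular library, not anything NIST prescribes.

```python
# Generic KEM flow, assuming the liboqs Python bindings
# (pip install liboqs-python). The algorithm name is illustrative.
import oqs

ALG = "Kyber512"  # one of the round-three finalist KEMs

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes this

    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext to send back.
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver recovers the same secret with its private key.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now share a key
```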

Adversarial Machine Learning and the CFAA

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, “What are the potential legal risks to adversarial ML researchers when they attack ML systems?” Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
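
To make one of those attack classes concrete: a membership-inference attack guesses whether a particular record was in a model's training set, in the simplest case by thresholding the model's confidence, since overfit models are more confident on data they trained on. Here is a toy sketch using scikit-learn with synthetic data and an assumed threshold; real attacks calibrate the threshold with shadow models.

```python
# Toy membership-inference sketch: an overfit model is more confident
# on records it trained on, so thresholding confidence leaks membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_in, y_in)

def confidence_on_true_label(model, X, y):
    # Probability the model assigns to each record's true class.
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

THRESHOLD = 0.9  # assumed; real attacks tune this with shadow models
members = confidence_on_true_label(model, X_in, y_in) > THRESHOLD
non_members = confidence_on_true_label(model, X_out, y_out) > THRESHOLD
print(f"flagged as training members: in-set {members.mean():.0%}, "
      f"held-out {non_members.mean():.0%}")
```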

Medium post on the paper. News article, which uses our graphic without attribution.

Fawkes: Digital Image Cloaking

Fawkes is a system for manipulating digital images so that they aren’t recognized by facial recognition systems.

At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would: sharing them on social media, sending them to friends, printing them, or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, the “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloaking effect is not easily detectable and will not cause errors in model training. However, when someone then tries to identify you using an unaltered image (e.g., a photo taken in public), they will fail.
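
For a sense of what "tiny, pixel-level changes" means, the sketch below clips a perturbation to a small per-pixel budget. To be clear, this is not the Fawkes algorithm, which optimizes its perturbation adversarially against a facial-recognition feature extractor; random noise within the same budget only illustrates the imperceptibility constraint. The file names and the budget are assumptions.

```python
# Toy bounded perturbation (NOT the actual Fawkes algorithm): clip a
# random change to a small per-pixel budget so it stays imperceptible.
import numpy as np
from PIL import Image

EPSILON = 4  # assumed budget: max change per pixel, out of 255

img = np.asarray(Image.open("photo.jpg"), dtype=np.int16)  # assumed path
rng = np.random.default_rng(0)
# Fawkes computes this perturbation adversarially against a feature
# extractor; random noise here only shows the size constraint.
delta = rng.integers(-EPSILON, EPSILON + 1, size=img.shape)
cloaked = np.clip(img + delta, 0, 255).astype(np.uint8)
Image.fromarray(cloaked).save("photo_cloaked.png")
```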

Research paper.
