SSL and internet security news

Monthly Archive: June 2018

Conservation of Threat

Here’s some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:

To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task – to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.

As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.

This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something say something” campaigns, and so on.

The academic paper.

Manipulative Social Media Practices

The Norwegian Consumer Council just published an excellent report on the deceptive practices tech companies use to trick people into giving up their privacy.

From the executive summary:

Facebook and Google have privacy intrusive defaults, where users who want the privacy friendly option have to go through a significantly longer process. They even obscure some of these settings so that the user cannot know that the more privacy intrusive option was preselected.

The popups from Facebook, Google and Windows 10 have design, symbols and wording that nudge users away from the privacy friendly choices. Choices are worded to compel users to make certain choices, while key information is omitted or downplayed. None of them lets the user freely postpone decisions. Also, Facebook and Google threaten users with loss of functionality or deletion of the user account if the user does not choose the privacy intrusive option.

[…]

The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical. We question whether this is in accordance with the principles of data protection by default and data protection by design, and if consent given under these circumstances can be said to be explicit, informed and freely given.

I am a big fan of the Norwegian Consumer Council. They’ve published some excellent research.

IEEE Statement on Strong Encryption vs. Backdoors

The IEEE came out in favor of strong encryption:

IEEE supports the use of unfettered strong encryption to protect confidentiality and integrity of data and communications. We oppose efforts by governments to restrict the use of strong encryption and/or to mandate exceptional access mechanisms such as “backdoors” or “key escrow schemes” in order to facilitate government access to encrypted data. Governments have legitimate law enforcement and national security interests. IEEE believes that mandating the intentional creation of backdoors or escrow schemes — no matter how well intentioned — does not serve those interests well and will lead to the creation of vulnerabilities that would result in unforeseen effects as well as some predictable negative consequences.

The full statement is here.

Bypassing Passcodes in iOS

Last week, a story was going around explaining how to brute-force an iOS passcode. Basically, the trick was to plug the phone into an external keyboard and try every PIN at once:

We reported Friday on Hickey’s findings, which claimed to be able to send all combinations of a user’s possible passcode in one go, by enumerating each code from 0000 to 9999, and concatenating the results in one string with no spaces. He explained that because this doesn’t give the software any breaks, the keyboard input routine takes priority over the device’s data-erasing feature.
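
To make the claimed technique concrete, here is a minimal Python sketch of the keystroke payload that description implies: every four-digit PIN from 0000 through 9999, concatenated into a single string with no separators. The keyboard injection itself, and whether iOS actually processes input this way, are exactly what Apple disputes below; this only illustrates the payload.

```python
# Minimal sketch of the payload described above: all 10,000 four-digit PINs,
# zero-padded and concatenated into one string with no spaces, intended to be
# sent as a single burst of keyboard input. Illustration only -- the keyboard
# injection over Lightning is not shown.
payload = "".join(f"{pin:04d}" for pin in range(10000))

print(len(payload))    # 40000 characters: 10,000 codes x 4 digits each
print(payload[:20])    # 00000001000200030004
```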

I didn’t write about it, because it seemed too good to be true. A few days later, Apple pushed back on the findings — and it seems that it doesn’t work.

This isn’t to say that no one can break into an iPhone. We know that companies like Cellebrite and Grayshift are renting/selling iPhone unlock tools to law enforcement — which means governments and criminals can do the same thing — and that Apple is releasing a new feature called “restricted mode” that may make those hacks obsolete.

Grayshift is claiming that its technology will still work.

Former Apple security engineer Braden Thomas, who now works for a company called Grayshift, warned customers who had bought his GrayKey iPhone unlocking tool that iOS 11.3 would make it a bit harder for cops to get evidence and data out of seized iPhones. A change in the beta didn’t break GrayKey, but would require cops to use GrayKey on phones within a week of them being last unlocked.

“Starting with iOS 11.3, iOS saves the last time a device has been unlocked (either with biometrics or passcode) or was connected to an accessory or computer. If a full seven days (168 hours) elapse [sic] since the last time iOS saved one of these events, the Lightning port is entirely disabled,” Thomas wrote in a blog post published in a customer-only portal, which Motherboard obtained. “You cannot use it to sync or to connect to accessories. It is basically just a charging port at this point. This is termed USB Restricted Mode and it affects all devices that support iOS 11.3.”
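
As a rough sketch of the policy Thomas describes (not Apple's actual implementation), the check boils down to comparing the current time against the last saved unlock or connection event:

```python
from datetime import datetime, timedelta

# Rough sketch of the USB Restricted Mode policy as described above -- not
# Apple's code. iOS records the last unlock or accessory/computer connection;
# once 168 hours (seven days) pass without such an event, the Lightning port
# is reduced to charging only.
RESTRICTION_WINDOW = timedelta(hours=168)

def lightning_data_allowed(last_saved_event: datetime, now: datetime) -> bool:
    """True if the port may still be used for sync or accessories."""
    return now - last_saved_event < RESTRICTION_WINDOW

# Example: a phone seized eight days after its last unlock is charge-only.
print(lightning_data_allowed(datetime(2018, 6, 12), datetime(2018, 6, 20)))  # False
```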

Whether that’s real or marketing, we don’t know.

Secure Speculative Execution

We’re starting to see research into designing speculative execution systems that avoid Spectre- and Meltdown-like security problems. Here’s one.

I don’t know if this particular design is secure. My guess is that we’re going to see several iterations of design and attack before we settle on something that works. But it’s good to see the research results emerge.

News article.

The Effects of Iran’s Telegram Ban

The Center for Human Rights in Iran has released a report outlining the effects of that country’s ban on Telegram, a secure messaging app used by about half of the country.

The ban will disrupt the most important, uncensored platform for information and communication in Iran, one that is used extensively by activists, independent and citizen journalists, dissidents and international media. It will also impact electoral politics in Iran, as centrist, reformist and other relatively moderate political groups that are allowed to participate in Iran’s elections have been heavily and successfully using Telegram to promote their candidates and electoral lists during elections. State-controlled domestic apps and media will not provide these groups with such a platform, even as they continue to do so for conservative and hardline political forces in the country, significantly aiding the latter.

From a Wired article:

Researchers found that the ban has had broad effects, hindering and chilling individual speech, forcing political campaigns to turn to state-sponsored media tools, limiting journalists and activists, curtailing international interactions, and eroding businesses that grew their infrastructure and reach off of Telegram.

It’s interesting that the analysis doesn’t really center around the security properties of Telegram, but more around its ubiquity as a messaging platform in the country.
