SSL and internet security news


The Myth of Consumer-Grade Security

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that’s not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: “After all, we are not talking about protecting the nation’s nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications.”

The thing is, that distinction between military and consumer products largely doesn’t exist. All of those “consumer products” Barr wants access to are used by government officials — heads of state, legislators, judges, military commanders and everyone else — worldwide. They’re used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They’re critical to national security as well as personal security.

This wasn’t true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn’t have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.

Through the mid-1990s, there was a difference between military-grade encryption and consumer-grade encryption. Laws regulated encryption as a munition and limited what could legally be exported only to key lengths that were easily breakable. That changed with the rise of Internet commerce, because the needs of commercial applications more closely mirrored the needs of the military. Today, the predominant encryption algorithm for commercial applications — Advanced Encryption Standard (AES) — is approved by the National Security Agency (NSA) to secure information up to the level of Top Secret. The Department of Defense’s classified analogs of the Internet — Secret Internet Protocol Router Network (SIPRNet), Joint Worldwide Intelligence Communications System (JWICS) and probably others whose names aren’t yet public — use the same Internet protocols, software, and hardware that the rest of the world does, albeit with additional physical controls. And the NSA routinely assists in securing business and consumer systems, including helping Google defend itself from Chinese hackers in 2010.
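To make the "same algorithm" point concrete, here is a minimal sketch of AES-256 in an authenticated mode, assuming the third-party pyca/cryptography package is available; the messages and key handling are illustrative only.

```python
# Illustrative only: AES-256-GCM via the pyca/cryptography package.
# This is the same AES family approved to protect classified information
# and the same AES that sits inside consumer messaging and storage apps.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
nonce = os.urandom(12)                     # 96-bit nonce, unique per message
aead = AESGCM(key)

ciphertext = aead.encrypt(nonce, b"consumer-grade is military-grade", None)
assert aead.decrypt(nonce, ciphertext, None) == b"consumer-grade is military-grade"
```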

Yes, there are some military applications that are different. The US nuclear system Barr mentions is one such example — and it uses ancient computers and 8-inch floppy drives. But for pretty much everything that doesn’t see active combat, it’s modern laptops, iPhones, the same Internet everyone else uses, and the same cloud services.

This is also true for corporate applications. Corporations rarely use customized encryption to protect their operations. They also use the same types of computers, networks, and cloud services that the government and consumers use. Customized security is both more expensive because it is unique, and less secure because it’s nonstandard and untested.

During the Cold War, the NSA had the dual mission of attacking Soviet computers and communications systems and defending domestic counterparts. It was possible to do both simultaneously only because the two systems were different at every level. Today, the entire world uses Internet protocols; iPhones and Android phones; and iMessage, WhatsApp and Signal to secure their chats. Consumer-grade encryption is the same as military-grade encryption, and consumer security is the same as national security.

Barr can’t weaken consumer systems without also weakening commercial, government, and military systems. There’s one world, one network, and one answer. As a matter of policy, the nation has to decide which takes precedence: offense or defense. If security is deliberately weakened, it will be weakened for everybody. And if security is strengthened, it is strengthened for everybody. It’s time to accept the fact that these systems are too critical to society to weaken. Everyone will be more secure with stronger encryption, even if it means the bad guys get to use that encryption as well.

This essay previously appeared on Lawfare.com.


Supply-Chain Attack against the Electron Development Platform

Electron is a cross-platform development system for many popular communications apps, including Skype, Slack, and WhatsApp. Security vulnerabilities in the update system allow someone to silently inject malicious code into applications. From a news article:

At the BSides LV security conference on Tuesday, Pavel Tsakalidis demonstrated a tool he created called BEEMKA, a Python-based tool that allows someone to unpack Electron ASAR archive files and inject new code into Electron’s JavaScript libraries and built-in Chrome browser extensions. The vulnerability is not part of the applications themselves but of the underlying Electron framework — and that vulnerability allows malicious activities to be hidden within processes that appear to be benign. Tsakalidis said that he had contacted Electron about the vulnerability but that he had gotten no response — and the vulnerability remains.

While making these changes required administrator access on Linux and MacOS, it only requires local access on Windows. Those modifications can create new event-based “features” that can access the file system, activate a Web cam, and exfiltrate information from systems using the functionality of trusted applications — including user credentials and sensitive data. In his demonstration, Tsakalidis showed a backdoored version of Microsoft Visual Studio Code that sent the contents of every code tab opened to a remote website.

Basically, the Electron ASAR files aren’t signed or encrypted, so modifying them is easy.
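Because the framework does no verification of its own, one blunt external mitigation is to record and periodically re-check the archive's hash. A minimal sketch, not an Electron API; the file path and the recorded digest are hypothetical.

```python
# Hypothetical integrity check for an Electron app.asar archive.
# Electron does not sign or verify these files itself, so any change
# to the archive goes unnoticed unless something external checks it.
import hashlib
import sys

ASAR_PATH = "/Applications/SomeApp.app/Contents/Resources/app.asar"  # hypothetical path
KNOWN_GOOD_SHA256 = "replace-with-digest-recorded-at-install-time"   # hypothetical value

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    digest = sha256_of(ASAR_PATH)
    if digest != KNOWN_GOOD_SHA256:
        sys.exit(f"app.asar has been modified (sha256 {digest})")
    print("app.asar matches the recorded digest")
```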

Note that this attack requires local access to the computer, which means that an attacker who could do this could already do much more damaging things. But once an app has been modified, it can be distributed to other users. It’s not a big-deal attack, but it’s a vulnerability that should be closed.


Wanted: Cybersecurity Imagery

Eli Sugarman of the Hewlett Foundation laments the sorry state of cybersecurity imagery:

The state of cybersecurity imagery is, in a word, abysmal. A simple Google Image search for the term proves the point: It’s all white men in hoodies hovering menacingly over keyboards, green “Matrix”-style 1s and 0s, glowing locks and server racks, or some random combination of those elements — sometimes the hoodie-clad men even wear burglar masks. Each of these images fails to convey anything about either the importance or the complexity of the topic — or the huge stakes for governments, industry and ordinary people alike inherent in topics like encryption, surveillance and cyber conflict.

I agree that this is a problem. It’s not something I noticed until recently. I work in words. I think in words. I don’t use PowerPoint (or anything similar) when I give presentations. I don’t need visuals.

But recently, I started teaching at the Harvard Kennedy School, and I constantly use visuals in my class. I made those same image searches, and I came up with similarly unacceptable results.

But unlike me, Hewlett is doing something about it. You can help: participate in the Cybersecurity Visuals Challenge.


Insider Logic Bombs

Add to the “not very smart criminals” file:

According to court documents, Tinley provided software services for Siemens’ Monroeville, PA offices for nearly ten years. Among the work he was asked to perform was the creation of spreadsheets that the company was using to manage equipment orders.

The spreadsheets included custom scripts that would update the content of the file based on current orders stored in other, remote documents, allowing the company to automate inventory and order management.

But while Tinley’s files worked for years, they started malfunctioning around 2014. According to court documents, Tinley planted so-called “logic bombs” that would trigger after a certain date, and crash the files.

Every time the scripts would crash, Siemens would call Tinley, who’d fix the files for a fee.
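A date-triggered logic bomb of this kind is usually just a hard-coded comparison against the current date buried in a macro or script. The hypothetical Python sketch below only illustrates the kind of pattern a reviewer might grep for; the regexes and file handling are assumptions, not a real forensic tool.

```python
# Hypothetical heuristic: flag macro or script text that compares the
# current date against a hard-coded literal, the classic shape of a
# date-triggered logic bomb like the one described above.
import re
import sys

SUSPICIOUS = [
    r"\bDate\(\)\s*[<>]=?\s*#\d{1,2}/\d{1,2}/\d{4}#",             # VBA: If Date() > #5/1/2014# Then
    r"\bNow\(\)\s*[<>]=?\s*#\d{1,2}/\d{1,2}/\d{4}#",              # VBA: If Now() >= #5/1/2014# Then
    r"datetime\.(?:date|datetime)\.(?:today|now)\(\)\s*[<>]=?",   # Python-style date checks
]

def scan(path):
    with open(path, errors="ignore") as f:
        text = f.read()
    for pattern in SUSPICIOUS:
        for match in re.finditer(pattern, text):
            print(f"{path}: suspicious date comparison: {match.group(0)!r}")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        scan(filename)
```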


Software Developers and Security

According to a survey: “68% of the security professionals surveyed believe it’s a programmer’s job to write secure code, but they also think less than half of developers can spot security holes.” And that’s a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. Roughly half of security professionals said they most often found bugs after code is merged in a test environment.

At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, “It’s a mess, no standardization, most of my work has never had a security scan.”

Another problem is that many companies don’t seem to take security seriously enough: nearly 44% of those surveyed reported that they’re not judged on their security vulnerabilities.


Yubico Security Keys with a Crypto Flaw

Wow, is this an embarrassing bug:

Yubico is recalling a line of security keys used by the U.S. government due to a firmware flaw. The company issued a security advisory today that warned of an issue in YubiKey FIPS Series devices with firmware versions 4.4.2 and 4.4.4 that reduced the randomness of the cryptographic keys it generates. The security keys are used by thousands of federal employees on a daily basis, letting them securely log on to their devices by issuing one-time passwords.

The problem in question occurs after the security key powers up. According to Yubico, a bug keeps “some predictable content” inside the device’s data buffer that could impact the randomness of the keys generated. Security keys with ECDSA signatures are in particular danger. A total of 80 of the 256 bits generated by the key remain static, meaning an attacker who gains access to several signatures could recreate the private key.
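The flaw reduced the randomness of the ECDSA nonce; the easiest way to see why that is fatal is the extreme case where the nonce repeats entirely. The sketch below is a toy illustration of the algebra only, using the secp256r1 group order; the key, nonce, and hashes are made-up numbers and no real curve arithmetic is performed.

```python
# Toy illustration of why non-random ECDSA nonces are fatal. The YubiKey
# flaw left 80 of 256 nonce bits fixed (recoverable with lattice methods);
# the fully repeated-nonce case below shows the underlying algebra.
#
# ECDSA over a group of prime order n: s = k^-1 * (z + r*d) mod n, where d
# is the private key, k the per-signature nonce, z the message hash, and r
# is derived from k (normally the x-coordinate of k*G).

# Group order of secp256r1 (NIST P-256).
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

d = 0x1CEB00DA          # "private key" (made up)
k = 0x0DDBA11           # nonce, wrongly reused for both signatures
r = pow(7, 123, n)      # stand-in for the x-coordinate of k*G; any value shows the algebra

z1, z2 = 0xAAAA, 0xBBBB  # two different message hashes

s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# An attacker who sees (r, s1, z1) and (r, s2, z2) recovers k, then d:
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n

assert k_rec == k and d_rec == d
print("recovered private key:", hex(d_rec))
```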

Boing Boing post.


Hacking Hardware Security Modules

Security researchers Gabriel Campana and Jean-Baptiste Bédrune are giving a hardware security module (HSM) talk at BlackHat in August:

This highly technical presentation targets an HSM manufactured by a vendor whose solutions are usually found in major banks and large cloud service providers. It will demonstrate several attack paths, some of them allowing unauthenticated attackers to take full control of the HSM. The presented attacks allow retrieving all HSM secrets remotely, including cryptographic keys and administrator credentials. Finally, we exploit a cryptographic bug in the firmware signature verification to upload a modified firmware to the HSM. This firmware includes a persistent backdoor that survives a firmware update.

They have an academic paper in French, and a presentation of the work. Here’s a summary in English.

There were plenty of technical challenges to solve along the way, in what was clearly a thorough and professional piece of vulnerability research:

  1. They started by using legitimate SDK access to their test HSM to upload a firmware module that would give them a shell inside the HSM. Note that this SDK access was used to discover the attacks, but is not necessary to exploit them.

  2. They then used the shell to run a fuzzer on the internal implementation of PKCS#11 commands to find reliable, exploitable buffer overflows.

  3. They checked they could exploit these buffer overflows from outside the HSM, i.e. by just calling the PKCS#11 driver from the host machine (a rough illustration of host-side PKCS#11 access appears after this list).

  4. They then wrote a payload that would override access control and, via another issue in the HSM, allow them to upload arbitrary (unsigned) firmware. It’s important to note that this backdoor is persistent: a subsequent update will not fix it.

  5. They then wrote a module that would dump all the HSM secrets, and uploaded it to the HSM.
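For a very rough sense of what "calling the PKCS#11 driver from the host" looks like, here is a sketch using the third-party python-pkcs11 package; the library path, token label, PIN, and the naive fuzzing loop are all assumptions, and real fuzzing of an HSM's command parser is far more involved.

```python
# Hypothetical sketch: exercising a vendor's PKCS#11 module from the host,
# feeding oversized or odd attribute values to object creation. The actual
# research fuzzed the HSM-internal PKCS#11 command handlers; this only
# illustrates the host-side entry point they later exploited.
import os
import pkcs11  # third-party package: python-pkcs11

LIB_PATH = "/usr/lib/vendor/libvendor_pkcs11.so"  # hypothetical vendor module
lib = pkcs11.lib(LIB_PATH)
token = lib.get_token(token_label="TESTTOKEN")    # hypothetical token label

with token.open(rw=True, user_pin="1234") as session:  # hypothetical PIN
    for size in (1, 64, 4096, 65536, 1 << 20):
        try:
            session.create_object({
                pkcs11.Attribute.CLASS: pkcs11.ObjectClass.DATA,
                pkcs11.Attribute.LABEL: "fuzz-%d" % size,
                pkcs11.Attribute.VALUE: os.urandom(size),
            })
        except Exception as exc:
            # A crash or reset of the HSM here is what a fuzzer looks for.
            print(f"size {size}: {type(exc).__name__}: {exc}")
```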


Chinese Military Wants to Develop Custom OS

Citing security concerns, the Chinese military wants to replace Windows with its own custom operating system:

Thanks to the Snowden, Shadow Brokers, and Vault7 leaks, Beijing officials are well aware of the US’ hefty arsenal of hacking tools, available for anything from smart TVs to Linux servers, and from routers to common desktop operating systems, such as Windows and Mac.

Since these leaks have revealed that the US can hack into almost anything, the Chinese government’s plan is to adopt a “security by obscurity” approach and run a custom operating system that will make it harder for foreign threat actors — mainly the US — to spy on Chinese military operations.

It’s unclear exactly how custom this new OS will be. It could be a Linux variant, like North Korea’s Red Star OS. Or it could be something completely new. Normally, I would be highly skeptical of a country being able to write and field its own custom operating system, but China is one of the few that is large enough to actually be able to do it. So I’m just moderately skeptical.


Programmers Who Don’t Understand Security Are Poor at Security

A university study confirmed the obvious: if you pay a random bunch of freelance programmers a small amount of money to write security software, they’re not going to do a very good job at it.

In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics have discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.

For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.

Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.

Of the 43, academics paid half of the group with €100, and the other half with €200, to determine if higher pay made a difference in the implementation of password security features.

Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner, and leaving the other half to store passwords in their preferred method — hence forming four quarters of developers paid €100 and prompted to use a secure password storage method (P100), developers paid €200 and prompted to use a secure password storage method (P200), devs paid €100 but not prompted for password security (N100), and those paid €200 but not prompted for password security (N200).

I don’t know why anyone would expect this group of people to implement a good secure password system. I’m sure they grabbed the first thing they found on GitHub that did the job.
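For contrast, "secure password storage" in this context means hashing each password with a salted, memory-hard key derivation function rather than storing it in plaintext or with a bare fast hash. A minimal sketch using Python's standard-library scrypt; the cost parameters are illustrative, not a recommendation.

```python
# Minimal example of salted, memory-hard password hashing using the
# standard library's scrypt KDF; the cost parameters are illustrative.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def verify_password(password, salt, expected):
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("hunter2", salt, stored)
```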


On the Security of Password Managers

There’s new research on the security of password managers, specifically 1Password, Dashlane, KeePass, and Lastpass. This work specifically looks at password leakage on the host computer. That is, does the password manager accidentally leave plaintext copies of the password lying around memory?

All password managers we examined sufficiently secured user secrets while in a “not running” state. That is, if a password database were to be extracted from disk and if a strong master password was used, then brute forcing of a password manager would be computationally prohibitive.

Each password manager also attempted to scrub secrets from memory. But residual buffers remained that contained secrets, most likely due to memory leaks, lost memory references, or complex GUI frameworks which do not expose internal memory management mechanisms to sanitize secrets.

This was most evident in 1Password7 where secrets, including the master password and its associated secret key, were present in both a locked and unlocked state. This is in contrast to 1Password4, where at most, a single entry is exposed in a “running unlocked” state and the master password exists in memory in an obfuscated form, but is easily recoverable. If 1Password4 scrubbed the master password memory region upon successful unlocking, it would comply with all proposed security guarantees we outlined earlier.

This paper is not meant to criticize specific password manager implementations; rather, it is meant to establish a reasonable minimum baseline which all password managers should comply with. It is evident that attempts are made to scrub sensitive memory in all password managers. However, each password manager fails to implement proper secrets sanitization for various reasons.

For example:

LastPass obfuscates the master password while users are typing in the entry, and when the password manager enters an unlocked state, database entries are only decrypted into memory when there is user interaction. However, ISE reported that these entries persist in memory after the software enters a locked state. It was also possible for the researchers to extract the master password and interacted-with password entries due to a memory leak.

KeePass scrubs the master password from memory, and it is not recoverable. However, errors in workflows allowed the researchers to extract credential entries that had been interacted with. In the case of the Windows APIs, various memory buffers that contain decrypted entries are sometimes not scrubbed correctly.

Whether this is a big deal or not depends on whether you consider your computer to be trusted.
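The scrubbing the researchers describe is, at its core, overwriting a secret's buffer as soon as it is no longer needed; the hard part is that garbage-collected runtimes and GUI frameworks copy secrets into buffers the application never sees. A minimal sketch of the best-effort pattern in Python, using a mutable bytearray; it illustrates the idea only and does not defeat interpreter-level copies.

```python
# Best-effort scrubbing: keep the secret in a mutable buffer and overwrite
# it as soon as it is no longer needed. Copies made by the interpreter,
# the GUI toolkit, or the clipboard are exactly the residual buffers the
# researchers kept finding.
def wipe(buf: bytearray) -> None:
    for i in range(len(buf)):
        buf[i] = 0

def unlock_database(master_password: bytearray) -> None:
    # ... derive keys, decrypt entries, etc. (omitted) ...
    pass

secret = bytearray(b"hunter2-master-password")
try:
    unlock_database(secret)
finally:
    wipe(secret)  # this copy, at least, no longer lingers in memory
    assert all(b == 0 for b in secret)
```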

Several people have emailed me to ask why my own Password Safe was not included in the evaluation, and whether it has the same vulnerabilities. My guess about the former is that Password Safe isn’t as popular as the others. (This is for two reasons: 1) I don’t publicize it very much, and 2) it doesn’t have an easy way to synchronize passwords across devices or otherwise store password data in the cloud.) As to the latter: we tried to code Password Safe not to leave plaintext passwords lying around in memory.

So, Independent Security Evaluators: take a look at Password Safe.

Also, remember the vulnerabilities found in many cloud-based password managers back in 2014?

News article. Slashdot thread.
