SSL and internet security news

Monthly Archive: April 2020

Securing Internet Videoconferencing Apps: Zoom and Others

The NSA just published a survey of video conferencing apps. So did Mozilla.

Zoom is on the good list, with some caveats. The company has done a lot of work addressing previous security concerns, but it still has a way to go on end-to-end encryption. Matthew Green looked at this. Zoom does offer end-to-end encryption if 1) everyone is using a Zoom app and not logging in to the meeting through a webpage, and 2) the meeting is not being recorded in the cloud. That’s pretty good, but the real worry is where the encryption keys are generated and stored. According to Citizen Lab, the company generates and distributes them itself:

The Zoom transport protocol adds Zoom’s own encryption scheme to RTP in an unusual way. By default, all participants’ audio and video in a Zoom meeting appears to be encrypted and decrypted with a single AES-128 key shared amongst the participants. The AES key appears to be generated and distributed to the meeting’s participants by Zoom servers. Zoom’s encryption and decryption use AES in ECB mode, which is well-understood to be a bad idea, because this mode of encryption preserves patterns in the input.
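Why ECB mode is "well-understood to be a bad idea" is easy to demonstrate. The sketch below uses a toy keyed function (HMAC-SHA256, truncated) as a stand-in for AES, so it illustrates the modes rather than performing real encryption: under ECB, identical plaintext blocks encrypt to identical ciphertext blocks, while a counter-style mode (the construction GCM is built on) hides the repetition.

```python
import hmac, hashlib, secrets

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: a keyed PRF, deterministic
    # per (key, block) just as AES is on a single block. Not real crypto.
    return hmac.new(key, block, hashlib.sha256).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: each 16-byte block is encrypted independently, so identical
    # plaintext blocks produce identical ciphertext blocks.
    assert len(plaintext) % 16 == 0
    return b"".join(toy_block_encrypt(key, plaintext[i:i + 16])
                    for i in range(0, len(plaintext), 16))

def ctr_like_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Counter-style keystream mode (what GCM builds on): each block's
    # keystream is unique, so repeated plaintext blocks look different.
    assert len(plaintext) % 16 == 0
    out = bytearray()
    for i in range(0, len(plaintext), 16):
        keystream = toy_block_encrypt(key, i.to_bytes(16, "big"))
        out += bytes(p ^ k for p, k in zip(plaintext[i:i + 16], keystream))
    return bytes(out)

key = secrets.token_bytes(16)
msg = b"ATTACK AT DAWN!!" * 3   # three identical 16-byte blocks

ecb = ecb_encrypt(key, msg)
ctr = ctr_like_encrypt(key, msg)
# Under ECB the repetition survives into the ciphertext:
print(ecb[0:16] == ecb[16:32])   # True
print(ctr[0:16] == ctr[16:32])   # False (with overwhelming probability)
```

This pattern-preservation is exactly what produced the famous "ECB penguin" image, and it is why ECB leaks structure even when the key is secret.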

The algorithm part was just fixed:

AES 256-bit GCM encryption: Zoom is upgrading to the AES 256-bit GCM encryption standard, which offers increased protection of your meeting data in transit and resistance against tampering. This provides confidentiality and integrity assurances on your Zoom Meeting, Zoom Video Webinar, and Zoom Phone data. Zoom 5.0, which is slated for release within the week, supports GCM encryption, and this standard will take effect once all accounts are enabled with GCM. System-wide account enablement will take place on May 30.

There is nothing in Zoom’s latest announcement about key management. So: while the company has done a really good job improving the security and privacy of their platform, there seems to be just one step remaining to fully encrypt the sessions.

The other thing I want Zoom to do is make the security options necessary to prevent Zoombombing available to users of the free version of the platform. Forcing users to pay for security isn’t a viable option right now.

Finally — I use Zoom all the time. I finished my Harvard class using Zoom; it’s the university standard. I am having Inrupt company meetings on Zoom. I am having professional and personal conferences on Zoom. It’s what everyone has, and the features are really good.

Powered by WPeMatico

How Did Facebook Beat a Federal Wiretap Demand?

This is interesting:

Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.

The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge’s decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.

[…]

The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.

Here’s the 2018 story. Slashdot thread.

Fooling NLP Systems Through Word Swapping

MIT researchers have built a system that fools natural-language processing systems by swapping words with synonyms:

The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence “The characters, cast in impossibly contrived situations, are totally estranged from reality” to “The characters, cast in impossibly engineered circumstances, are fully estranged from reality” makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.

The results of this adversarial machine learning attack are impressive:

For example, Google’s powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.

The paper:

Abstract: Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective — it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving — it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient — it generates adversarial text with computational complexity linear to the text length.
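The greedy find-important-words-then-substitute loop the abstract describes can be sketched in a few lines. Everything below is a toy: the classifier is a hand-made sentiment lexicon and the synonym table is invented, whereas TextFooler queries the real target model (e.g. BERT) and finds synonyms via word embeddings. But the attack structure — rank words by how much they move the classifier, swap the important ones, stop when the label flips — is the same.

```python
# Toy lexicon "classifier": positive minus negative word counts.
NEGATIVE = {"contrived", "estranged", "boring", "bad"}
POSITIVE = {"great", "love", "fun"}

SYNONYMS = {  # tiny hand-made synonym table (illustrative only)
    "contrived": "engineered",
    "situations": "circumstances",
    "estranged": "detached",
    "totally": "fully",
}

def score(words):
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def classify(words):
    return "positive" if score(words) >= 0 else "negative"

def attack(sentence):
    words = sentence.lower().split()
    target = classify(words)
    # Rank positions by importance: how much deleting the word moves the score.
    ranked = sorted(range(len(words)),
                    key=lambda i: abs(score(words) - score(words[:i] + words[i + 1:])),
                    reverse=True)
    # Greedily replace important words with synonyms until the label flips.
    for i in ranked:
        if words[i] in SYNONYMS:
            words[i] = SYNONYMS[words[i]]
            if classify(words) != target:
                break
    return " ".join(words)

orig = "the characters cast in impossibly contrived situations are totally estranged from reality"
adv = attack(orig)
print(classify(orig.split()))  # negative
print(classify(adv.split()))   # positive: label flips, meaning barely changes
```

Against a real neural classifier the importance scores come from probing the model itself, which is why the paper frames this as a black-box attack.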

Automatic Instacart Bots

Instacart is taking legal action against bots that automatically place orders:

Before it closed, to use Cartdash users first selected what items they want from Instacart as normal. Once that was done, they had to provide Cartdash with their Instacart email address, password, mobile number, tip amount, and whether they prefer the first available delivery slot or are more flexible. The tool then checked that their login credentials were correct, logged in, and refreshed the checkout page over and over again until a new delivery window appeared. It then placed the order, Koch explained.
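The refresh-until-a-slot-appears mechanism described above is simple to sketch. This is a generic illustration of how such a bot works, not Cartdash's actual code; `check_delivery_slots` is a stub standing in for the authenticated page fetch and scrape.

```python
import time

def check_delivery_slots(session) -> list:
    # Stub for "refresh the checkout page and parse available slots".
    # A real bot would make an authenticated HTTP request here; this
    # stand-in just pretends a slot opens on the 4th poll.
    session["polls"] += 1
    return ["Tomorrow 10-11am"] if session["polls"] >= 4 else []

def wait_for_slot(session, poll_interval=0.01, max_polls=100):
    # The core of any slot-sniping bot: poll, check, sleep, repeat,
    # then act the instant a slot appears.
    for _ in range(max_polls):
        slots = check_delivery_slots(session)
        if slots:
            return slots[0]        # a real bot would place the order here
        time.sleep(poll_interval)  # short interval to beat human refreshers
    return None

session = {"polls": 0}
print(wait_for_slot(session))  # Tomorrow 10-11am
```

The bot wins not through any clever exploit but simply by polling faster and more persistently than a human can.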

I am writing a new book about hacking in general, and want to discuss this. First, does this count as a hack? I think it does, since it’s a way to subvert the Instacart ordering system.

When asked if this tool may give people an unfair advantage over those who don’t use the tool, Koch said, “at this point, it’s a matter of awareness, not technical ability, since people who can use Instacart can use Cartdash.” When pushed on how, realistically, not every user of Instacart is going to know about Cartdash, even after it may receive more attention, and the people using Cartdash will still have an advantage over people who aren’t using automated tools, Koch again said, “it’s a matter of awareness, not technical ability.”

Second, should Instacart take action against this? On the one hand, it isn’t “fair” in that Cartdash users get an advantage in finding a delivery slot. But it’s not really any different than programs that “snipe” on eBay and other bidding platforms.

Third, does Instacart even stand a chance in the long run? As various AI technologies give us more agents and bots, this will increasingly become the new normal. I think we need to figure out a fair allocation mechanism that doesn’t rely on the precise timing of submissions.
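One such timing-independent mechanism is a batch-and-lottery draw. A minimal sketch (the function and parameter names are mine, not any real platform's API):

```python
import random

def allocate_slots(requests, slots, seed=None):
    # Batch-and-lottery allocation: instead of first-come-first-served,
    # collect all requests submitted during a window, then draw winners
    # uniformly at random. Timing within the window is irrelevant, so a
    # bot that polls a thousand times a second gains nothing over a human
    # who submits once.
    rng = random.Random(seed)
    pool = list(requests)
    rng.shuffle(pool)
    return dict(zip(slots, pool))  # each slot goes to one drawn requester

requests = ["alice", "bot-1", "bob", "bot-2", "carol"]
slots = ["10-11am", "11am-12pm"]
print(allocate_slots(requests, slots, seed=42))
```

The design choice is the point: fairness comes from randomizing within a window rather than rewarding whoever submits first, which is the property that makes sniping bots pointless.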

Friday Squid Blogging: Humboldt Squid Backlight Themselves to Communicate More Clearly

This is neat:

Deep in the Pacific Ocean, six-foot-long Humboldt squid are known for being aggressive, cannibalistic and, according to new research, good communicators.

Known as “red devils,” the squid can rapidly change the color of their skin, making different patterns to communicate, something other squid species are known to do.

But Humboldt squid live in almost total darkness more than 1,000 feet below the surface, so their patterns aren’t very visible. Instead, according to a new study, they create backlighting for the patterns by making their bodies glow, like the screen of an e-reader.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Global Surveillance in the Wake of COVID-19

OneZero is tracking thirty countries around the world that are implementing surveillance programs in the wake of COVID-19:

The most common form of surveillance implemented to battle the pandemic is the use of smartphone location data, which can track population-level movement down to enforcing individual quarantines. Some governments are making apps that offer coronavirus health information, while also sharing location information with authorities for a period of time. For instance, in early March, the Iranian government released an app that it pitched as a self-diagnostic tool. While the tool’s efficacy was likely low, given reports of asymptomatic carriers of the virus, the app saved location data of millions of Iranians, according to a Vice report.

One of the most alarming measures being implemented is in Argentina, where those who are caught breaking quarantine are being forced to download an app that tracks their location. In Hong Kong, those arriving in the airport are given electronic tracking bracelets that must be synced to their home location through their smartphone’s GPS signal.

Chinese COVID-19 Disinformation Campaign

The New York Times is reporting on state-sponsored disinformation campaigns coming out of China:

Since that wave of panic, United States intelligence agencies have assessed that Chinese operatives helped push the messages across platforms, according to six American officials, who spoke on the condition of anonymity to publicly discuss intelligence matters. The amplification techniques are alarming to officials because the disinformation showed up as texts on many Americans’ cellphones, a tactic that several of the officials said they had not seen before.

New iPhone Zero-Day Discovered

Last year, ZecOps discovered two iPhone zero-day exploits. They will be patched in the next iOS release:

Avraham declined to disclose many details about who the targets were, and did not say whether they lost any data as a result of the attacks, but said “we were a bit surprised about who was targeted.” He said some of the targets were an executive from a telephone carrier in Japan, a “VIP” from Germany, managed security service providers from Saudi Arabia and Israel, people who work for a Fortune 500 company in North America, and an executive from a Swiss company.

[…]

On the other hand, this is not as polished a hack as others, as it relies on sending an oversized email, which may get blocked by certain email providers. Moreover, Avraham said it only works on the default Apple Mail app, and not on Gmail or Outlook, for example.

Another Story of Bad 1970s Encryption

This one is from the Netherlands. It seems to be clever cryptanalysis rather than a backdoor.

The Dutch intelligence service has been able to read encrypted communications from dozens of countries since the late 1970s thanks to a microchip, according to research by de Volkskrant on Thursday. The Netherlands could eavesdrop on confidential communication from countries such as Iran, Egypt and Saudi Arabia.

Philips, together with Siemens, built an encryption machine in the late 1970s. The device, the Aroflex, was used for secret communication between NATO allies. In addition, the companies also wanted to market the T1000CA, a commercial variant of the Aroflex with less strong cryptography.

The Volkskrant investigation shows that the Ministry of Foreign Affairs and the Marine Intelligence Service (MARID) cracked the cryptography of this device before it was launched. Philips helped the ministry and the intelligence service.

Normally it would take at least a month and a half to crack the T1000CA encryption. “Too long to get useful information from intercepted communication,” the newspaper writes. But MARID employees, together with Philips, succeeded in accelerating this 2,500 times by developing a special microchip.

The T1000CA was then sold to numerous non-NATO countries, including countries in the Middle East and Asia. These countries could then be eavesdropped on by the Dutch intelligence services for years.

The 1970s was a decade of really bad commercial cryptography. DES, in 1975, was an improvement with its 56-bit key. I’m sure there are lots of these stories.

Here’s more about the Aroflex. And here’s what I think is the original Dutch story.

Vulnerability Finding Using Machine Learning

Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30,000 bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001, Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high-priority security bugs 97 percent of the time.

News article.

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.
