The Problem with Treating Data as a Commodity

Excellent Brookings paper: “Why data ownership is the wrong approach to protecting privacy.”

From the introduction:

Treating data like it is property fails to recognize either the value that varieties of personal information serve or the abiding interest that individuals have in their personal information even if they choose to “sell” it. Data is not a commodity. It is information. Any system of information rights — whether patents, copyrights, and other intellectual property, or privacy rights — presents some tension with strong interest in the free flow of information that is reflected by the First Amendment. Our personal information is in demand precisely because it has value to others and to society across a myriad of uses.

From the conclusion:

Privacy legislation should empower individuals through more layered and meaningful transparency and individual rights to know, correct, and delete personal information in databases held by others. But relying entirely on individual control will not do enough to change a system that is failing individuals, and trying to reinforce control with a property interest is likely to fail society as well. Rather than trying to resolve whether personal information belongs to individuals or to the companies that collect it, a baseline federal privacy law should directly protect the abiding interest that individuals have in that information and also enable the social benefits that flow from sharing information.

Another California Data Privacy Law

The California Consumer Privacy Act is a lesson in missed opportunities. It was passed in haste, to stop a ballot initiative that would have been even more restrictive:

In September 2017, Alastair Mactaggart and Mary Ross proposed a statewide ballot initiative entitled the “California Consumer Privacy Act.” Ballot initiatives are a process under California law in which private citizens can propose legislation directly to voters, and pursuant to which such legislation can be enacted through voter approval without any action by the state legislature or the governor. While the proposed privacy initiative was initially met with significant opposition, particularly from large technology companies, some of that opposition faded in the wake of the Cambridge Analytica scandal and Mark Zuckerberg’s April 2018 testimony before Congress. By May 2018, the initiative appeared to have garnered sufficient support to appear on the November 2018 ballot. On June 21, 2018, the sponsors of the ballot initiative and state legislators then struck a deal: in exchange for withdrawing the initiative, the state legislature would pass an agreed version of the California Consumer Privacy Act. The initiative was withdrawn, and the state legislature passed (and the Governor signed) the CCPA on June 28, 2018.

Since then, it has been substantially amended — that is, watered down — at the request of various surveillance capitalism companies. Enforcement was supposed to start this year, but we haven’t seen much yet.

And we could have had that ballot initiative.

It looks like Alastair Mactaggart and others are back.

Advocacy group Californians for Consumer Privacy, which started the push for a state-wide data privacy law, announced this week that it has the signatures it needs to get version 2.0 of its privacy rules on the US state’s ballot in November, and submitted its proposal to Sacramento.

This time the goal is to tighten up the rules that its previous ballot measure managed to get into law, despite the determined efforts of internet giants like Google and Facebook to kill it. In return for the legislation being passed, that ballot measure was dropped. Now, it looks like the campaigners are taking their fight to a people’s vote after all.

[…]

The new proposal would add more rights, including controls over the use and sale of sensitive personal information, such as health and financial information, racial or ethnic origin, and precise geolocation. It would also triple existing fines for companies caught breaking the rules surrounding data on children (under 16s) and would require an opt-in to even collect such data.

The proposal would also give Californians the right to know when their information is used to make fundamental decisions about them, such as getting credit or employment offers. And it would require political organizations to divulge when they use similar data for campaigns.

And just to push the tech giants from fury into full-blown meltdown, the new ballot measure would require a majority vote in the legislature for any amendments to the law, effectively stripping their vast lobbying powers and cutting off the multitude of different ways the measure and its enforcement can be watered down within the political process.

I don’t know why they accepted the compromise in the first place. It was obvious that the legislative process would be hijacked by the powerful tech companies. I support getting this onto the ballot this year.

Securing the Internet of Things through Class-Action Lawsuits

This law journal article discusses the role of class-action litigation to secure the Internet of Things.

Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class-action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It’s a lot to read, but it’s an interesting take on how to secure this otherwise disastrously insecure world.

And it was inspired by my book, Click Here to Kill Everybody.

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal came into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.
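
To make that matching step concrete, here is a minimal sketch of the comparison a recognition system performs, assuming faces have already been converted into numeric embedding vectors by some model. The gallery contents, the 128-dimension vector size, and the similarity threshold are all illustrative assumptions, not any vendor’s actual pipeline.

```python
import numpy as np

# Hypothetical gallery: identity -> face embedding vector. In a real
# system these vectors would come from a face-embedding model applied
# to the database of existing photos; here they are random placeholders.
rng = np.random.default_rng(0)
gallery = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.8):
    """Return the best-matching identity, or None if nothing clears
    the (illustrative) similarity threshold."""
    best_name, best_score = None, threshold
    for name, ref in gallery.items():
        score = cosine_similarity(probe, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A probe embedding from a camera frame would be produced the same way;
# here we reuse a gallery vector so the match succeeds.
print(identify(gallery["person_a"]))  # -> "person_a"
```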

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.
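
As an illustration of how passive that last kind of identification can be, here is a rough sketch, using the scapy packet library, of logging the MAC addresses in nearby Wi-Fi probe requests. It assumes root privileges and a wireless card already in monitor mode under the (assumed) interface name wlan0mon; note too that modern phone operating systems randomize probe-request MACs precisely to frustrate this kind of tracking.

```python
# Sketch: passively log MAC addresses broadcast by nearby devices in
# Wi-Fi probe requests. Assumes root privileges and a wireless card in
# monitor mode; the interface name "wlan0mon" is an assumption.
from scapy.all import sniff, Dot11ProbeReq  # pip install scapy

seen = set()

def log_probe(pkt):
    # addr2 is the transmitting device's MAC address
    if pkt.haslayer(Dot11ProbeReq) and pkt.addr2 not in seen:
        seen.add(pkt.addr2)
        print(f"device seen: {pkt.addr2}")

sniff(iface="wlan0mon", prn=log_probe, store=False)
```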

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are – using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

There is a huge – and almost entirely unregulated – data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are; it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies – and governments – to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
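
A toy sketch of that fragility: a persistent but anonymous identifier is enough to accumulate a history, and a single identified transaction retroactively names the whole record. All of the data below is invented for illustration.

```python
# Minimal sketch of the correlation step: an anonymous but *persistent*
# identifier (here a cookie ID) is enough to build a profile, and one
# identified event later attaches a real name to the entire history.
from collections import defaultdict

events = [
    {"cookie": "c-7f3a", "action": "viewed", "item": "running shoes"},
    {"cookie": "c-7f3a", "action": "viewed", "item": "knee brace"},
    {"cookie": "c-7f3a", "action": "purchased", "item": "running shoes",
     "card_name": "Jane Doe"},  # one card purchase de-anonymizes everything
]

profiles = defaultdict(lambda: {"name": None, "history": []})
for e in events:
    p = profiles[e["cookie"]]
    p["history"].append((e["action"], e["item"]))
    if "card_name" in e:  # identified event: link the real name
        p["name"] = e["card_name"]

print(dict(profiles))
```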

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law – passed in Vermont in 2018 – that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s use of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

How Privacy Laws Hurt Defendants

Rebecca Wexler has an interesting op-ed about an inadvertent harm that privacy laws can cause: while law enforcement can often access third-party data to aid in prosecution, the accused don’t have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may not have set out to make the criminal process even more unfair, but the unjust result is not surprising. When lawmakers propose privacy bills to protect sensitive information, law enforcement agencies lobby for exceptions so they can continue to access the information. Few lobby for the accused to have similar rights. Just as the privacy interests of poor, minority and heavily policed communities are often ignored in the lawmaking process, so too are the interests of criminal defendants, many from those same communities.

In criminal cases, both the prosecution and the accused have a right to subpoena evidence so that juries can hear both sides of the case. The new privacy bills need to ensure that law enforcement and defense investigators operate under the same rules when they subpoena digital data. If lawmakers believe otherwise, they should have to explain and justify that view.

For more detail, see her paper.

The Importance of Protecting Cybersecurity Whistleblowers

Interesting essay arguing that we need better legislation to protect cybersecurity whistleblowers.

Congress should act to protect cybersecurity whistleblowers because information security has never been so important, or so challenging. In the wake of a barrage of shocking revelations about data breaches and companies’ mishandling of customer data, a bipartisan consensus has emerged in support of legislation to give consumers more control over their personal information, require companies to disclose how they collect and use consumer data, and impose penalties for data breaches and misuse of consumer data. The Federal Trade Commission (“FTC”) has been held out as the best agency to implement this new regulation. But for any such legislation to be effective, it must protect the courageous whistleblowers who risk their careers to expose data breaches and unauthorized use of consumers’ private data.

Whistleblowers strengthen regulatory regimes, and cybersecurity regulation would be no exception. Republican and Democratic leaders from the executive and legislative branches have extolled the virtues of whistleblowers. High-profile cases abound. Recently, Christopher Wylie exposed Cambridge Analytica’s misuse of Facebook user data to manipulate voters, including its apparent theft of data from 50 million Facebook users as part of a psychological profiling campaign. Though additional research is needed, the existing empirical data reinforces the consensus that whistleblowers help prevent, detect, and remedy misconduct. Therefore it is reasonable to conclude that protecting and incentivizing whistleblowers could help the government address the many complex challenges facing our nation’s information systems.

Major Tech Companies Finally Endorse Federal Privacy Regulation

The major tech companies, scared that states like California might impose actual privacy regulations, have now decided that they can better lobby the federal government for much weaker national legislation that will preempt any stricter state measures.

I’m sure they’ll still do all they can to weaken the California law, but they know they’ll do better at the national level.

California Passes New Privacy Law

The California legislature unanimously passed the strongest data privacy law in the nation. This is great news, but I have a lot of reservations. The Internet tech companies pressed to get this law passed out of self-defense. A ballot initiative was already going to be voted on in November, one with even stronger data privacy protections. The author of that initiative agreed to pull it if the legislature passed something similar, and that’s why it did. This law doesn’t take effect until 2020, and that gives the legislature a lot of time to amend the law before it actually protects anyone’s privacy. And a conventional law is much easier to amend than a ballot initiative. Just as the California legislature gutted its net neutrality law in committee at the behest of the telcos, I expect it to do the same with this law at the behest of the Internet giants.

So: tentative hooray, I guess.
