SSL and internet security news

Is 85% of US Critical Infrastructure in Private Hands?

Most US critical infrastructure is run by private corporations. This has major security implications, because it pits a random power company in, say, Ohio against Russia’s cyber command, which isn’t a fair fight.

When this problem is discussed, people regularly quote the statistic that 85% of US critical infrastructure is in private hands. It’s a handy number, and matches our intuition. Still, I have never been able to find a factual basis, or anyone who knows where the number comes from. Paul Rosenzweig investigates, and reaches the same conclusion.

So we don’t know the percentage, but I think we can safely say that it’s a lot.


New US Executive Order on Cybersecurity

President Biden signed an executive order to improve government cybersecurity, setting new security standards for software sold to the federal government.

For the first time, the United States will require all software purchased by the federal government to meet, within six months, a series of new cybersecurity standards. Although the companies would have to “self-certify,” violators would be removed from federal procurement lists, which could kill their chances of selling their products on the commercial market.

I’m a big fan of these sorts of measures. The US government is a big enough market that vendors will try to comply with procurement regulations, and the improvements will benefit all customers of the software.

More news articles.


Teaching Cybersecurity to Children

A new draft of an Australian educational curriculum proposes teaching cybersecurity to children as young as five:

The proposed curriculum aims to teach five-year-old children — an age at which Australian kids first attend school — not to share information such as date of birth or full names with strangers, and that they should consult parents or guardians before entering personal information online.

Six- and seven-year-olds will be taught how to use usernames and passwords, and the pitfalls of clicking on pop-up links to competitions.

By the time kids are in third and fourth grade, they’ll be taught how to identify the personal data that may be stored by online services, and how that can reveal their location or identity. Teachers will also discuss “the use of nicknames and why these are important when playing online games.”

By late primary school, kids will be taught to be respectful online, including “responding respectfully to other people’s opinions even if they are different from personal opinions.”

I have mixed feelings about this. Norms around these things are changing so fast, and it’s not likely that we in the older generation will get to dictate what the younger generation does. But these sorts of online privacy conversations are worth having around the same time children learn about privacy in other contexts.


DNI’s Annual Threat Assessment

The Office of the Director of National Intelligence released its “Annual Threat Assessment of the U.S. Intelligence Community.” Cybersecurity is covered on pages 20-21. Nothing surprising:

  • Cyber threats from nation states and their surrogates will remain acute.
  • States’ increasing use of cyber operations as a tool of national power, including increasing use by militaries around the world, raises the prospect of more destructive and disruptive cyber activity.
  • Authoritarian and illiberal regimes around the world will increasingly exploit digital tools to surveil their citizens, control free expression, and censor and manipulate information to maintain control over their populations.
  • During the last decade, state-sponsored hackers have compromised software and IT service supply chains, helping them conduct operations — espionage, sabotage, and potentially prepositioning for warfighting.

The supply chain line is new; I hope the government is paying attention.


More on the Chinese Zero-Day Microsoft Exchange Hack

Nick Weaver has an excellent post on the Microsoft Exchange hack:

The investigative journalist Brian Krebs has produced a handy timeline of events and a few things stand out from the chronology. The attacker was first detected by one group on Jan. 5 and another on Jan. 6, and Microsoft acknowledged the problem immediately. During this time the attacker appeared to be relatively subtle, exploiting particular targets (although we generally lack insight into who was targeted). Microsoft determined on Feb. 18 that it would patch these vulnerabilities on the March 9th “Patch Tuesday” release of fixes.

Somehow, the threat actor either knew that the exploits would soon become worthless or simply guessed that they would. So, in late February, the attacker changed strategy. Instead of simply exploiting targeted Exchange servers, the attackers stepped up their pace considerably by targeting tens of thousands of servers to install the web shell, an exploit that allows attackers to have remote access to a system. Microsoft then released the patch with very little warning on Mar. 2, at which point the attacker simply sought to compromise almost every vulnerable Exchange server on the Internet. The result? Virtually every vulnerable mail server received the web shell as a backdoor for further exploitation, making the patch effectively useless against the Chinese attackers; almost all of the vulnerable systems were exploited before they were patched.

This is a rational strategy for any actor who doesn’t care about consequences. When a zero-day is confidential and undiscovered, the attacker tries to be careful, only using it against targets of sufficient value. But if the attacker knows or has reason to believe their vulnerabilities may be patched, they will increase the pace of exploits and, once a patch is released, there is no reason not to try to exploit everything possible.

We know that Microsoft shares advance information about updates with some organizations. I have long believed that they give the NSA a few weeks’ notice to do basically what the Chinese did: use the exploit widely, because you don’t have to worry about losing the capability.

Estimates of the number of affected networks continue to rise. At least 30,000 in the US, and 100,000 worldwide. More?

And the vulnerabilities:

The Chinese actors were not using a single vulnerability but actually a sequence of four “zero-day” exploits. The first allowed an unauthorized user to basically tell the server “let me in, I’m the server” by tricking the server into contacting itself. After the unauthorized user gained entry, the hacker could use the second vulnerability, which used a malformed voicemail that, when interpreted by the server, allowed them to execute arbitrary commands. Two further vulnerabilities allow the attacker to write new files, which is a common primitive that attackers use to increase their access: An attacker uses a vulnerability to write a file and then uses the arbitrary command execution vulnerability to execute that file.

Using this access, the attackers could read anybody’s email or indeed take over the mail server completely. Critically, they would almost always do more, introducing a “web shell,” a program that would enable further remote exploitation even if the vulnerabilities are patched.

The details of that web shell matter. If it was sophisticated, it implies that the Chinese hackers were planning on installing it from the beginning of the operation. If it’s kind of slapdash, it implies a last-minute addition when they realized their exploit window was closing.
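To make the “web shell” concrete: on an Exchange server the shell is typically just a small server-side script dropped into a web-served directory, so one basic triage step is to look for script files that appeared where none should have. The following is a minimal sketch of that idea, assuming a Windows Exchange/IIS layout; the directory paths and the “recently modified .aspx file” heuristic are illustrative assumptions, not indicators from this specific campaign, and real incident response should rely on the published Microsoft and CISA indicators.

```python
# Minimal triage sketch: flag recently modified .aspx files in directories an
# Exchange/IIS server commonly serves from. The paths and the age heuristic
# are assumptions for illustration, not indicators from the actual campaign.
import os
import time

# Hypothetical directories where web shells tend to be dropped on IIS/Exchange.
SUSPECT_DIRS = [
    r"C:\inetpub\wwwroot\aspnet_client",
    r"C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy\owa\auth",
]

MAX_AGE_DAYS = 30  # assumption: anything newer than this deserves a look


def recently_modified_aspx(dirs, max_age_days=MAX_AGE_DAYS):
    """Return paths of .aspx files modified within the last max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    hits = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for root, _, files in os.walk(d):
            for name in files:
                if name.lower().endswith(".aspx"):
                    path = os.path.join(root, name)
                    if os.path.getmtime(path) > cutoff:
                        hits.append(path)
    return hits


if __name__ == "__main__":
    for path in recently_modified_aspx(SUSPECT_DIRS):
        print("Review:", path)
```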

Now come the criminal attacks. Any unpatched network is still vulnerable, and we know from history that lots of networks will remain vulnerable for a long time. Expect the ransomware gangs to weaponize this attack within days.


National Security Risks of Late-Stage Capitalism

Early in 2020, cyberspace attackers apparently working for the Russian government compromised a piece of widely used network management software made by a company called SolarWinds. The hack gave the attackers access to the computer networks of some 18,000 of SolarWinds’s customers, including US government agencies such as the Homeland Security Department and State Department, American nuclear research labs, government contractors, IT companies and nongovernmental agencies around the world.

It was a huge attack, with major implications for US national security. The Senate Intelligence Committee is scheduled to hold a hearing on the breach on Tuesday. Who is at fault?

The US government deserves considerable blame, of course, for its inadequate cyberdefense. But to see the problem only as a technical shortcoming is to miss the bigger picture. The modern market economy, which aggressively rewards corporations for short-term profits and aggressive cost-cutting, is also part of the problem: Its incentive structure all but ensures that successful tech companies will end up selling insecure products and services.

Like all for-profit corporations, SolarWinds aims to increase shareholder value by minimizing costs and maximizing profit. The company is owned in large part by Silver Lake and Thoma Bravo, private-equity firms known for extreme cost-cutting.

SolarWinds certainly seems to have underspent on security. The company outsourced much of its software engineering to cheaper programmers overseas, even though that typically increases the risk of security vulnerabilities. For a while, in 2019, the update server’s password for SolarWinds’s network management software was reported to be “solarwinds123.” Russian hackers were able to breach SolarWinds’s own email system and lurk there for months. Chinese hackers appear to have exploited a separate vulnerability in the company’s products to break into US government computers. A cybersecurity adviser for the company said that he quit after his recommendations to strengthen security were ignored.

There is no good reason to underspend on security other than to save money — especially when your clients include government agencies around the world and when the technology experts that you pay to advise you are telling you to do more.

As the economics writer Matt Stoller has suggested, cybersecurity is a natural area for a technology company to cut costs because its customers won’t notice unless they are hacked — and if they are, they will have already paid for the product. In other words, the risk of a cyberattack can be transferred to the customers. Doesn’t this strategy jeopardize the possibility of long-term, repeat customers? Sure, there’s a danger there — but investors are so focused on short-term gains that they’re too often willing to take that risk.

The market loves to reward corporations for risk-taking when those risks are largely borne by other parties, like taxpayers. This is known as “privatizing profits and socializing losses.” Standard examples include companies that are deemed “too big to fail,” which means that society as a whole pays for their bad luck or poor business decisions. When national security is compromised by high-flying technology companies that fob off cybersecurity risks onto their customers, something similar is at work.

Similar misaligned incentives affect your everyday cybersecurity, too. Your smartphone is vulnerable to something called SIM-swap fraud because phone companies want to make it easy for you to frequently get a new phone — and they know that the cost of fraud is largely borne by customers. Data brokers and credit bureaus that collect, use, and sell your personal data don’t spend a lot of money securing it because it’s your problem if someone hacks them and steals it. Social media companies too easily let hate speech and misinformation flourish on their platforms because it’s expensive and complicated to remove it, and they don’t suffer the immediate costs; indeed, they tend to profit from user engagement regardless of its nature.

There are two problems to solve. The first is information asymmetry: buyers can’t adequately judge the security of software products or company practices. The second is a perverse incentive structure: the market encourages companies to make decisions in their private interest, even if that imperils the broader interests of society. Together these two problems result in companies that save money by taking on greater risk and then pass off that risk to the rest of us, as individuals and as a nation.

The only way to force companies to provide safety and security features for customers and users is with government intervention. Companies need to pay the true costs of their insecurities, through a combination of laws, regulations, and legal liability. Governments routinely legislate safety — pollution standards, automobile seat belts, lead-free gasoline, food service regulations. We need to do the same with cybersecurity: the federal government should set minimum security standards for software and software development.

In today’s underregulated markets, it’s just too easy for software companies like SolarWinds to save money by skimping on security and to hope for the best. That’s a rational decision in today’s free-market world, and the only way to change that is to change the economic incentives.

This essay previously appeared in the New York Times.


GPS Vulnerabilities

Really good op-ed in the New York Times about how vulnerable the GPS system is to interference, spoofing, and jamming — and potential alternatives.

The 2018 National Defense Authorization Act included funding for the Departments of Defense, Homeland Security and Transportation to jointly conduct demonstrations of various alternatives to GPS, which were concluded last March. Eleven potential systems were tested, including eLoran, a low-frequency, high-power timing and navigation system transmitted from terrestrial towers at Coast Guard facilities throughout the United States.

“China, Russia, Iran, South Korea and Saudi Arabia all have eLoran systems because they don’t want to be as vulnerable as we are to disruptions of signals from space,” said Dana Goward, the president of the Resilient Navigation and Timing Foundation, a nonprofit that advocates for the implementation of an eLoran backup for GPS.

Also under consideration by federal authorities are timing systems delivered via fiber optic network and satellite systems in a lower orbit than GPS, which therefore have a stronger signal, making them harder to hack. A report on the technologies was submitted to Congress last week.

GPS is a piece of our critical infrastructure that is essential to a lot of the rest of our critical infrastructure. It needs to be more secure.
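One low-cost mitigation that doesn’t require new infrastructure is for GPS-dependent systems to sanity-check the fixes they receive before trusting them. Here is a minimal sketch of that idea: flag consecutive position fixes that imply a physically implausible speed, which is one crude way spoofing or severe interference can show up. The fix format and the speed threshold are assumptions for illustration; a real receiver would cross-check against inertial sensors, multiple constellations, or an independent timing source such as eLoran.

```python
# Minimal sketch of a plausibility check for GPS fixes: flag position jumps
# that imply impossible speeds between consecutive fixes. The Fix format and
# the speed threshold are illustrative assumptions, not any receiver's API.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Fix:
    t: float    # seconds since epoch
    lat: float  # degrees
    lon: float  # degrees


def haversine_m(a: Fix, b: Fix) -> float:
    """Great-circle distance between two fixes, in meters."""
    r = 6371000.0
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(h))


def implausible_jumps(fixes, max_speed_mps=350.0):
    """Yield consecutive fix pairs whose implied speed exceeds max_speed_mps."""
    for prev, cur in zip(fixes, fixes[1:]):
        dt = cur.t - prev.t
        if dt <= 0:
            yield prev, cur  # time running backwards is itself suspicious
            continue
        if haversine_m(prev, cur) / dt > max_speed_mps:
            yield prev, cur
```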


Chinese Supply-Chain Attack on Computer Systems

Bloomberg News has a major story about Chinese hacking of computer motherboards made by Supermicro, Lenovo, and others. It’s been going on since at least 2008. The US government has known about it for almost as long, and has tried to keep the attack secret:

China’s exploitation of products made by Supermicro, as the U.S. company is known, has been under federal scrutiny for much of the past decade, according to 14 former law enforcement and intelligence officials familiar with the matter. That included an FBI counterintelligence investigation that began around 2012, when agents started monitoring the communications of a small group of Supermicro workers, using warrants obtained under the Foreign Intelligence Surveillance Act, or FISA, according to five of the officials.

There’s lots of detail in the article, and I recommend that you read it through.

This is a follow-on, with a lot more detail, to a story Bloomberg reported in fall 2018. I didn’t believe the story back then, writing:

I don’t think it’s real. Yes, it’s plausible. But first of all, if someone actually surreptitiously put malicious chips onto motherboards en masse, we would have seen a photo of the alleged chip already. And second, there are easier, more effective, and less obvious ways of adding backdoors to networking equipment.

I seem to have been wrong. From the current Bloomberg story:

Mike Quinn, a cybersecurity executive who served in senior roles at Cisco Systems Inc. and Microsoft Corp., said he was briefed about added chips on Supermicro motherboards by officials from the U.S. Air Force. Quinn was working for a company that was a potential bidder for Air Force contracts, and the officials wanted to ensure that any work would not include Supermicro equipment, he said. Bloomberg agreed not to specify when Quinn received the briefing or identify the company he was working for at the time.

“This wasn’t a case of a guy stealing a board and soldering a chip on in his hotel room; it was architected onto the final device,” Quinn said, recalling details provided by Air Force officials. The chip “was blended into the trace on a multilayered board,” he said.

“The attackers knew how that board was designed so it would pass” quality assurance tests, Quinn said.

Supply-chain attacks are the flavor of the moment, it seems. But they’re serious, and very hard to defend against in our deeply international IT industry. (I have repeatedly called this an “insurmountable problem.”) Here’s me in 2018:

Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government.

We need some fundamental security research here. I wrote this in 2019:

The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, “You have to presume a dirty network.” Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.
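The simplest version of “trustworthy systems out of untrustworthy parts” is redundancy plus cross-checking: ask several independently sourced components the same question and only act when enough of them agree, so that subverting any single part isn’t enough. Here is a minimal sketch of that pattern; the sources are stand-in functions and the quorum rule is an assumption for illustration, not a description of how any particular production system works.

```python
# Minimal sketch of one "trustworthy from untrustworthy" pattern: query several
# independently sourced components and accept an answer only if a majority of
# them agree. The sources below are stand-ins, not real services.
from collections import Counter


def majority_answer(sources, query, quorum=None):
    """Return the answer given by at least `quorum` sources, or None."""
    answers = []
    for source in sources:
        try:
            answers.append(source(query))
        except Exception:
            continue  # a failed or compromised source simply loses its vote
    if not answers:
        return None
    quorum = quorum or (len(sources) // 2 + 1)
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count >= quorum else None


if __name__ == "__main__":
    # Hypothetical example: two honest resolvers and one lying one.
    sources = [lambda q: "10.0.0.5", lambda q: "10.0.0.5", lambda q: "6.6.6.6"]
    print(majority_answer(sources, "resolve example.internal"))  # -> "10.0.0.5"
```

The vote only means something if the parts fail or are subverted independently, which is why supplier diversity, isolation between components, and monitoring matter in any real deployment of this idea.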

It seems that supply-chain attacks are constantly in the news right now. That’s good. They’ve been a serious problem for a long time, and we need to take the threat seriously. For further reading, I strongly recommend this Atlantic Council report from last summer: “Breaking trust: Shades of crisis across an insecure software supply chain.”
