
Vulnerabilities in Weapons Systems

“If you think any of these systems are going to work as expected in wartime, you’re fooling yourself.”

That was Bruce’s response at a conference hosted by US Transportation Command in 2017, after learning that their computerized logistical systems were mostly unclassified and on the Internet. That may be necessary to keep in touch with civilian companies like FedEx in peacetime or when fighting terrorists or insurgents. But in a new era facing off with China or Russia, it is dangerously complacent.

Any twenty-first century war will include cyber operations. Weapons and support systems will be successfully attacked. Rifles and pistols won’t work properly. Drones will be hijacked midair. Boats won’t sail, or will be misdirected. Hospitals won’t function. Equipment and supplies will arrive late or not at all.

Our military systems are vulnerable. We need to face that reality by halting the purchase of insecure weapons and support systems and by incorporating the realities of offensive cyberattacks into our military planning.

Over the past decade, militaries have established cyber commands and developed cyberwar doctrine. However, much of the current discussion is about offense. Increasing our offensive capabilities without being able to secure them is like having all the best guns in the world, and then storing them in an unlocked, unguarded armory. They won’t just be stolen; they’ll be subverted.

During that same period, we’ve seen increasingly brazen cyberattacks by everyone from criminals to governments. Everything is now a computer, and those computers are vulnerable. Cars, medical devices, power plants, and fuel pipelines have all been targets. Military computers, whether they’re embedded inside weapons systems or on desktops managing the logistics of those weapons systems, are similarly vulnerable. We could see effects as stodgy as making a tank impossible to start up, or as sophisticated as retargeting a missile midair.

Military software is unlikely to be any more secure than commercial software. Although sensitive military systems rely on domestically manufactured chips as part of the Trusted Foundry program, many military systems contain the same foreign chips and code that commercial systems do: just like everyone around the world uses the same mobile phones, networking equipment, and computer operating systems. For example, there has been serious concern over Chinese-made 5G networking equipment that might be used by China to install “backdoors” that would allow the equipment to be controlled. This is just one of many risks to our normal civilian computer supply chains. And since military software is vulnerable to the same cyberattacks as commercial software, military supply chains have many of the same risks.

This is not speculative. A 2018 GAO report expressed concern regarding the lack of secure and patchable US weapons systems. The report observed that “in operational testing, the [Department of Defense] routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic.” It’s a similar attitude to corporate executives who believe that they can’t be hacked — and equally naive.

An updated GAO report from earlier this year found some improvements, but the basic problem remained: “DOD is still learning how to contract for cybersecurity in weapon systems, and selected programs we reviewed have struggled to incorporate systems’ cybersecurity requirements into contracts.” While DOD now appears aware of the lack of cybersecurity requirements, it is still not sure how to fix the problem, and in three of the five cases GAO reviewed, it simply chose not to include the requirements at all.

Militaries around the world are now exploiting these vulnerabilities in weapons systems to carry out operations. When Israel bombed a Syrian nuclear reactor in 2007, the raid was preceded by what is believed to have been a cyberattack on Syrian air defenses that resulted in radar screens showing no threat as bombers zoomed overhead. In 2018, a 29-country NATO exercise, Trident Juncture, that included cyberweapons was disrupted by Russian GPS jamming. NATO does try to test cyberweapons outside such exercises, but has limited scope in doing so. In May, Jens Stoltenberg, the NATO secretary-general, said that “NATO computer systems are facing almost daily cyberattacks.”

The war of the future will not only be about explosions, but will also be about disabling the systems that make armies run. It’s not (solely) that bases will get blown up; it’s that some bases will lose power, data, and communications. It’s not that self-driving trucks will suddenly go mad and begin rolling over friendly soldiers; it’s that they’ll casually roll off roads or into water where they sit, rusting, and in need of repair. It’s not that targeting systems on guns will be retargeted to 1600 Pennsylvania Avenue; it’s that many of them could simply turn off and not turn back on again.

So, how do we prepare for this next war? First, militaries need to introduce a little anarchy into their planning. Let’s have wargames where essential systems malfunction or are subverted, not all of the time, but randomly. To help combat siloed military thinking, include some civilians as well. Allow their ideas into the room when predicting potential enemy action. And militaries need to have well-developed backup plans, for when systems are subverted. In Joe Haldeman’s 1975 science-fiction novel The Forever War, he postulated a “stasis field” that forced his space marines to rely on nothing more than Roman military technologies, like javelins. We should be thinking in the same direction.

NATO isn’t yet allowing civilians not employed by NATO or associated military contractors access to their training cyber ranges where vulnerabilities could be discovered and remediated before battlefield deployment. Last year, one of us (Tarah) was listening to a NATO briefing after the end of the 2020 Cyber Coalition exercises, and asked how she and other information security researchers could volunteer to test cyber ranges used to train its cyber incident response force. She was told that including civilians would be a “welcome thought experiment in the tabletop exercises,” but including them in reality wasn’t considered. There is a rich opportunity here: giving outside researchers access to these ranges would provide transparency into where improvements can be made.

Second, it’s time to take cybersecurity seriously in military procurement, from weapons systems to logistics and communications contracts. In the three-year span from the original 2018 GAO report to this year’s report, cybersecurity audit compliance went from 0% to 40% (the two of five programs mentioned earlier). We need to get much better. DOD requires that its contractors and suppliers follow the Cybersecurity Maturity Model Certification process; it should abide by the same standards. Making those standards both more rigorous and mandatory would be an obvious second step.

Gone are the days when we can pretend that our technologies will work in the face of a military cyberattack. Securing our systems will make everything we buy more expensive — maybe a lot more expensive. But the alternative is no longer viable.

The future of war is cyberwar. If your weapons and systems aren’t secure, don’t even bother bringing them onto the battlefield.

This essay was written with Tarah Wheeler, and previously appeared in Brookings TechStream.


Hacking Weapons Systems

Lukasz Olejnik has a good essay on hacking weapons systems.

Basically, there is no reason to believe that software in weapons systems is any more vulnerability-free than any other software. So now the question is whether the software can be accessed over the Internet. Increasingly, it is. This is likely to become a bigger problem in the near future. We need to think about future wars where the tech simply doesn’t work.


The NSA on the Risks of Exposing Location Data

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It’s hard to be clandestine in our always-connected world.

News articles.


Attacking Soldiers on Social Media

A research group at NATO’s Strategic Communications Center of Excellence catfished soldiers involved in a European military exercise — we don’t know what country they were from — to demonstrate the power of the attack technique.

Over four weeks, the researchers developed fake pages and closed groups on Facebook that looked like they were associated with the military exercise, as well as profiles impersonating service members both real and imagined.

To recruit soldiers to the pages, they used targeted Facebook advertising. Those pages then promoted the closed groups the researchers had created. Inside the groups, the researchers used their phony accounts to ask the real service members questions about their battalions and their work. They also used these accounts to “friend” service members. According to the report, Facebook’s Suggested Friends feature proved helpful in surfacing additional targets.

The researchers also tracked down service members’ Instagram and Twitter accounts and searched for other information available online, some of which a bad actor might be able to exploit. “We managed to find quite a lot of data on individual people, which would include sensitive information,” Biteniece says. “Like a serviceman having a wife and also being on dating apps.”

By the end of the exercise, the researchers identified 150 soldiers, found the locations of several battalions, tracked troop movements, and compelled service members to engage in “undesirable behavior,” including leaving their positions against orders.

“Every person has a button. For somebody there’s a financial issue, for somebody it’s a very appealing date, for somebody it’s a family thing,” Sarts says. “It’s varied, but everybody has a button. The point is, what’s openly available online is sufficient to know what that is.”

This is the future of warfare. It’s one of the reasons China stole all of that data from the Office of Personnel Management. If indeed a country’s intelligence service was behind the Equifax attack, this is why they did it.

Go back and read this scenario from the Center for Strategic and International Studies. Why wouldn’t a country intent on starting a war do it that way?


Military Carrier Pigeons in the Era of Electronic Warfare

They have advantages:

Pigeons are certainly no substitute for drones, but they provide a low-visibility option to relay information. Considering the storage capacity of microSD memory cards, a pigeon’s organic characteristics provide front line forces a relatively clandestine means to transport gigabytes of video, voice, or still imagery and documentation over considerable distance with zero electromagnetic emissions or obvious detectability to radar. These decidedly low-technology options prove difficult to detect and track. Pigeons cannot talk under interrogation, although they are not entirely immune to being held under suspicion of espionage. Within an urban environment, a pigeon has even greater potential to blend into the local avian population, further compounding detection.

The author points out that both France and China still maintain a small number of pigeons in case electronic communications are disrupted.

And there’s an existing RFC.
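To put the quoted point about microSD storage capacity in perspective, here is a rough back-of-the-envelope sketch of a single pigeon’s effective throughput. The card size, flight distance, and sustained speed are assumed round numbers chosen for illustration, not figures from the essay.

```c
/* Back-of-the-envelope "sneakernet" throughput for one carrier pigeon.
 * All inputs are assumptions for illustration: a 1 TB microSD card,
 * a 100 km one-way flight, and a sustained speed of roughly 80 km/h. */
#include <stdio.h>

int main(void)
{
    const double card_bytes  = 1e12;   /* assumed 1 TB microSD payload */
    const double distance_km = 100.0;  /* assumed flight distance      */
    const double speed_kmh   = 80.0;   /* rough racing-pigeon speed    */

    double flight_hours   = distance_km / speed_kmh;
    double flight_seconds = flight_hours * 3600.0;

    /* Effective throughput in megabits per second, ignoring handling time. */
    double throughput_mbps = card_bytes * 8.0 / flight_seconds / 1e6;

    printf("Flight time:          %.2f h\n", flight_hours);
    printf("Effective throughput: %.0f Mb/s\n", throughput_mbps);
    return 0;
}
```

Under those assumptions the bird delivers on the order of 1,800 Mb/s of effective bandwidth with zero RF emissions; the cost is latency measured in hours, which is why the author frames pigeons as a fallback rather than a replacement for electronic links.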


How the US Military Can Better Keep Hackers

Interesting commentary:

The military is an impossible place for hackers thanks to antiquated career management, forced time away from technical positions, lack of mission, non-technical mid- and senior-level leadership, and staggering pay gaps, among other issues.

It is possible the military needs a cyber corps in the future, but by accelerating promotions, offering graduate school to newly commissioned officers, easing limited lateral entry for exceptional private-sector talent, and shortening the private/public pay gap, the military can better accommodate its most technical members now.

The model the author uses is military doctors.


Supply-Chain Security

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It’s a legitimate fear, and perhaps a prudent action. But it’s just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add “backdoors” to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It’s relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn’t the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: “Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems.”

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government added backdoors into its products; other Israeli tech companies have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, which was claimed to be free of Western influence and backdoors. If a country doesn’t trust another country, then it can’t trust that country’s computer products.

But this trust isn’t limited to the country where the company is based. We have to trust the country where the software is written — and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips, without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today’s large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries’ citizens are writing software for Apple or Microsoft or Google.
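To make the 2003 Linux example concrete: the attempt reportedly hid a one-character change inside what looked like routine error checking in the kernel’s wait4() code, using “=” where “==” belonged. The sketch below mirrors that structure in a self-contained user-space program; the struct, flag names, and values are hypothetical stand-ins, not the actual kernel code.

```c
/* Illustration of the "=" vs "==" style of backdoor reported in the 2003
 * Linux incident. Everything here (struct task, FLAG_A, FLAG_B) is a
 * hypothetical stand-in, not the real kernel code. */
#include <stdio.h>

#define FLAG_A 0x40000000u
#define FLAG_B 0x80000000u

struct task { unsigned int uid; };  /* stand-in for the caller's credentials */

static int check_options(unsigned int options, struct task *current)
{
    /* Looks like routine rejection of an invalid flag combination, but
     * "current->uid = 0" ASSIGNS 0 (root) instead of comparing, and the
     * assignment evaluates to 0, so the error path is never taken. */
    if ((options == (FLAG_A | FLAG_B)) && (current->uid = 0))
        return -1;  /* the kernel analogue would be -EINVAL */
    return 0;
}

int main(void)
{
    struct task caller = { .uid = 1000 };        /* unprivileged user          */
    check_options(FLAG_A | FLAG_B, &caller);     /* the "magic" flag combination */
    printf("uid after call: %u\n", caller.uid);  /* prints 0: caller is now root */
    return 0;
}
```

Because the assignment yields zero, nothing visibly misbehaves and no error is returned; a reviewer skimming the change sees an ordinary sanity check, and the real attempt was reportedly caught through source-control anomalies rather than code review.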

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so that it has that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don’t know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don’t know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It’s doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can’t buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we’re largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn’t get too far out of control.

This essay previously appeared in the Washington Post.
