SSL and internet security news


Half a Million IoT Device Passwords Published

It’s a list of easy-to-guess passwords for IoT devices on the Internet as recently as last October and November. Useful for anyone putting together a bot network:

A hacker has published this week a massive list of Telnet credentials for more than 515,000 servers, home routers, and IoT (Internet of Things) “smart” devices.

The list, which was published on a popular hacking forum, includes each device’s IP address, along with a username and password for the Telnet service, a remote access protocol that can be used to control devices over the internet.

According to experts who spoke to ZDNet this week, and a statement from the leaker himself, the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.
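
The underlying exposure is easy to audit on devices you own. Below is a minimal sketch, in Python with only the standard library, that tests whether a host accepts connections on the Telnet port (23). The address used is a placeholder for your own router or device; probing equipment you don't control is illegal in many jurisdictions.

    import socket

    def telnet_exposed(host, port=23, timeout=3.0):
        """Return True if the host accepts TCP connections on the Telnet port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Placeholder address: check your own router or IoT device.
    if telnet_exposed("192.168.1.1"):
        print("Telnet is reachable -- disable it, or at least change any default credentials.")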


Brazil Charges Glenn Greenwald with Cybercrimes

Glenn Greenwald has been charged with cybercrimes in Brazil, stemming from publishing information and documents that were embarrassing to the government. The charges are that he actively helped the people who actually did the hacking:

Citing intercepted messages between Mr. Greenwald and the hackers, prosecutors say the journalist played a “clear role in facilitating the commission of a crime.”

For instance, prosecutors contend that Mr. Greenwald encouraged the hackers to delete archives that had already been shared with The Intercept Brasil, in order to cover their tracks.

Prosecutors also say that Mr. Greenwald was communicating with the hackers while they were actively monitoring private chats on Telegram, a messaging app. The complaint charged six other individuals, including four who were detained last year in connection with the cellphone hacking.

This isn’t new, or unique to Brazil. Last year, the US charged Julian Assange with doing essentially the same thing in the case of Chelsea Manning:

The indictment alleges that in March 2010, Assange engaged in a conspiracy with Chelsea Manning, a former intelligence analyst in the U.S. Army, to assist Manning in cracking a password stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a U.S. government network used for classified documents and communications. Manning, who had access to the computers in connection with her duties as an intelligence analyst, was using the computers to download classified records to transmit to WikiLeaks. Cracking the password would have allowed Manning to log on to the computers under a username that did not belong to her. Such a deceptive measure would have made it more difficult for investigators to determine the source of the illegal disclosures.

During the conspiracy, Manning and Assange engaged in real-time discussions regarding Manning’s transmission of classified records to Assange. The discussions also reflect Assange actively encouraging Manning to provide more information. During an exchange, Manning told Assange that “after this upload, that’s all I really have got left.” To which Assange replied, “curious eyes never run dry in my experience.”

Good commentary on the Assange case here.

It’s too early for any commentary on the Greenwald case. Lots of news articles are essentially saying the same thing. I’ll post more news when there is some.


SIM Hijacking

SIM hijacking — or SIM swapping — is an attack where a fraudster contacts your cell phone provider and convinces them to switch your account to a phone that they control. Since your smartphone often serves as a security measure or backup verification system, this allows the fraudster to take over other accounts of yours. Sometimes this involves people inside the phone companies.

Phone companies have added security measures since this attack became popular and public, but a new study (news article) shows that the measures aren’t helping:

We examined the authentication procedures used by five pre-paid wireless carriers when a customer attempted to change their SIM card. These procedures are an important line of defense against attackers who seek to hijack victims’ phone numbers by posing as the victim and calling the carrier to request that service be transferred to a SIM card the attacker possesses. We found that all five carriers used insecure authentication challenges that could be easily subverted by attackers. We also found that attackers generally only needed to target the most vulnerable authentication challenges, because the rest could be bypassed.

It’s a classic security vs. usability trade-off. The phone companies want to provide easy customer service for their legitimate customers, and that system is what’s being exploited by the SIM hijackers. Companies could make the fraud harder, but it would necessarily also make it harder for legitimate customers to modify their accounts.


Clearview AI and Facial Recognition

The New York Times has a long story about Clearview AI, a small company that scrapes identified photos of people from pretty much everywhere, and then uses unstated magical AI technology to identify people in other photos.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

[…]

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
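
Clearview has not said how its system works, but a typical face-search pipeline embeds each scraped photo as a fixed-length vector and answers queries by nearest-neighbor lookup. The sketch below, in Python with NumPy, shows only the lookup stage; the embeddings and database here are random stand-ins, not anything Clearview has disclosed.

    import numpy as np

    def face_search(query, db, top_k=5):
        """Return indices of the top_k database embeddings most similar to the query."""
        db_unit = db / np.linalg.norm(db, axis=1, keepdims=True)
        q_unit = query / np.linalg.norm(query)
        scores = db_unit @ q_unit          # cosine similarity against every row
        return np.argsort(scores)[::-1][:top_k]

    # Stand-ins: 10,000 random 512-dimensional "embeddings" and a random query.
    # A real system would produce these with a face-embedding model, and at
    # billions of images would use an approximate index instead of a full scan.
    db = np.random.randn(10_000, 512)
    query = np.random.randn(512)
    print(face_search(query, db))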

Another article.


Friday Squid Blogging: Giant Squid Genome Analyzed

This is fantastic work:

In total, the researchers identified approximately 2.7 billion DNA base pairs, which is around 90 percent the size of the human genome. There’s nothing particularly special about that size, especially considering that the axolotl genome is 10 times larger than the human genome. It’s going to take some time to fully understand and appreciate the intricacies of the giant squid’s genetic profile, but these preliminary results are already helping to explain some of its more remarkable features.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.


Critical Windows Vulnerability Discovered by NSA

Yesterday’s Microsoft Windows patches included a fix for a critical vulnerability in the system’s crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

That’s really bad, and you should all patch your system right now, before you finish reading this blog post.

As far as anyone knows, this vulnerability was not detected or exploited in the wild before the patch was released; it was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.
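
The flaw, as publicly reported, is that CryptoAPI trusted certificates carrying explicit ECC curve parameters and matched cached trusted keys by their public point without checking that the generator matched. A minimal sketch, in pure Python over standard P-256 arithmetic, shows why that is fatal: an attacker who controls the generator can present any trusted public point as his own key, with private scalar 1. The secret scalar below is a stand-in; this illustrates the math, not Windows code.

    # NIST P-256 domain parameters (b is not needed for the addition formulas)
    p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
    a = p - 3
    G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
         0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

    def ec_add(P, Q):
        """Point addition on the curve y^2 = x^3 + ax + b over GF(p)."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                              # point at infinity
        if P == Q:
            m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)

    def ec_mul(k, P):
        """Double-and-add scalar multiplication."""
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    # The CA's real key: public point Q = d*G, private scalar d kept secret.
    d = 0x1234567890abcdef                  # stand-in secret scalar
    Q = ec_mul(d, G)

    # The attacker's certificate carries *explicit* curve parameters with a
    # forged generator G' = Q and claims the private scalar 1, so the
    # attacker's public point 1*G' equals the trusted point Q exactly.
    assert ec_mul(1, Q) == Q

    # A verifier that matches trusted keys by public point alone cannot tell
    # the two apart, and the attacker signs with the known scalar 1. The fix
    # is to also validate the curve parameters against the expected named curve.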

Early yesterday morning, NSA’s Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and — to my shock — took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation’s cyberweapons stash — my personal favorite theory — she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And — seriously — patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability — why wouldn’t it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.


Artificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism” — websites meant to appear like legitimate local news outlets but that publish propaganda instead.
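
To gauge how low the barrier is, here is a minimal sketch using the open-source Hugging Face transformers library. The model named here, gpt2, is a small public one chosen only for illustration and is far weaker than the systems described above.

    # Minimal sketch: bulk text generation with an off-the-shelf model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    outputs = generator(
        "As a concerned voter, I believe the proposed rule",
        max_new_tokens=60,
        num_return_sequences=3,   # three distinct "personalized" comments
        do_sample=True,
    )
    for out in outputs:
        print(out["generated_text"], "\n---")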

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them — maybe half — were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.
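
Template reuse at that scale is trivial to spot. Here is a minimal sketch, in Python, of the kind of cursory scrutiny those comments failed: normalize each comment and group near-identical texts by a crude fingerprint of their word sequence. The sample comments are invented, not real FCC filings.

    import re
    from collections import Counter

    def fingerprint(comment):
        """Crude template fingerprint: lowercase, strip punctuation, and
        keep every third word so simple synonym swaps still collide."""
        words = re.findall(r"[a-z']+", comment.lower())
        return " ".join(words[::3])

    comments = [                      # invented examples
        "I strongly oppose the repeal of net neutrality protections.",
        "I strongly reject the repeal of net neutrality safeguards.",
        "Net neutrality repeal is wrong and I oppose it completely.",
    ]
    counts = Counter(fingerprint(c) for c in comments)
    for fp, n in counts.most_common():
        if n > 1:
            print(f"{n} comments share template fingerprint: {fp!r}")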

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos — sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots — a proposed California law would require bots to identify themselves — but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.


Police Surveillance Tools from Special Services Group

Special Services Group, a company that sells surveillance tools to the FBI, DEA, ICE, and other US government agencies, has had its secret sales brochure published. Motherboard received the brochure as part of a FOIA request to the Irvine Police Department in California.

“The Tombstone Cam is our newest video concealment offering the ability to conduct remote surveillance operations from cemeteries,” one section of the Black Book reads. The device can also capture audio, its battery can last for two days, and “the Tombstone Cam is fully portable and can be easily moved from location to location as necessary,” the brochure adds. Another product is a video and audio capturing device that looks like an alarm clock, suitable for “hotel room stings,” and other cameras are designed to appear like small tree trunks and rocks, the brochure reads.

The “Shop-Vac Covert DVR Recording System” is essentially a camera and 1 TB hard drive hidden inside a vacuum cleaner. “An AC power connector is available for long-term deployments, and DC power options can be connected for mobile deployments also,” the brochure reads. The description doesn’t say whether the vacuum cleaner itself works.

[…]

One of the company’s “Rapid Vehicle Deployment Kits” includes a camera hidden inside a baby car seat. “The system is fully portable, so you are not restricted to the same drop car for each mission,” the description adds.

[…]

The so-called “K-MIC In-mouth Microphone & Speaker Set” is a tiny Bluetooth device that sits on a user’s teeth and allows them to “communicate hands-free in crowded, noisy surroundings” with “near-zero visual indications,” the Black Book adds.

Other products include more traditional surveillance cameras and lenses as well as tools for surreptitiously gaining entry to buildings. The “Phantom RFID Exploitation Toolkit” lets a user clone an access card or fob, and the so-called “Shadow” product can “covertly provide the user with PIN code to an alarm panel,” the brochure reads.

The Motherboard article also reprints the scary emails Motherboard received from Special Services Group when it asked the company for comment. Of course, Motherboard published the information anyway.
