
Problems with Multifactor Authentication

Roger Grimes on why multifactor authentication isn’t a panacea:

The first time I heard of this issue was from a Midwest CEO. His organization had been hit by ransomware to the tune of $10M. Operationally, they were still recovering nearly a year later. And, embarrassingly, it was his most trusted VP who let the attackers in. It turns out that the VP had approved over 10 different push-based messages for logins that he was not involved in. When the VP was asked why he approved logins for logins he was not actually doing, his response was, “They (IT) told me that I needed to click on Approve when the message appeared!”

And there you have it in a nutshell. The VP did not understand the importance (“the WHY”) of why it was so important to ONLY approve logins that they were participating in. Perhaps they were told this. But there is a good chance that IT, when implementing the new push-based MFA, instructed them as to what they needed to do to successfully log in, but failed to mention what they needed to do when they were not logging in if the same message arrived. Most likely, IT assumed that anyone would naturally understand that it also meant not approving unexpected, unexplained logins. Did the end user get trained as to what to do when an unexpected login arrived? Were they told to click on “Deny” and to contact IT Help Desk to report the active intrusion?

Or was the person told the correct instructions for both approving and denying and it just did not take? We all have busy lives. We all have too much to do. Perhaps the importance of the last part of the instructions just did not sink in. We can think we hear and not really hear. We can hear and still not care.

Using AI to Scale Spear Phishing

The problem with spear phishing is that it takes time and creativity to create individualized, enticing phishing emails. Researchers are using GPT-3 to attempt to solve that problem:

The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to predict a person’s proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues versus the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.

It’s just a matter of time before this is really effective. Combine it with voice and video synthesis, and you have some pretty scary scenarios. The real risk isn’t that AI-generated phishing emails are as good as human-generated ones, it’s that they can be generated at much greater scale.

Defcon presentation and slides. Another news article.

Malware Hidden in Call of Duty Cheating Software

News article:

Most troublingly, Activision says that the “cheat” tool has been advertised multiple times on a popular cheating forum under the title “new COD hack.” (Gamers looking to flout the rules will typically go to such forums to find new ways to do so.) While the report doesn’t mention which forum they were posted on (that certainly would’ve been helpful), it does say that these offerings have popped up a number of times. They have also been seen advertised in YouTube videos, where instructions were provided on how gamers can run the “cheats” on their devices, and the report says that “comments [on the videos] seemingly indicate people had downloaded and attempted to use the tool.”

Part of the reason this attack could work so well is that game cheats typically require a user to disable key security features that would otherwise keep a malicious program out of their system. The hacker is basically getting the victim to do their own work for them.

“It is common practice when configuring a cheat program to run it with the highest system privileges,” the report notes. “Guides for cheats will typically ask users to disable or uninstall antivirus software and host firewalls, disable kernel code signing, etc.”

Detailed report.

Details of a Computer Banking Scam

This is a longish video that describes a profitable computer banking scam that’s run out of call centers in places like India. There’s a lot of fluff about glitterbombs and the like, but the details are interesting. The scammers convince the victims to give them remote access to their computers, and then convince them that they’ve mistyped a dollar amount and received a large refund that they didn’t deserve. Then they convince the victims to send cash to a drop site, where a money mule retrieves it and forwards it to the scammers.

I found it interesting for several reasons. One, it illustrates the complex business nature of the scam: there are a lot of people doing specialized jobs in order for it to work. Two, it clearly shows the psychological manipulation involved, and how it preys on the unsophisticated and vulnerable. And three, it’s an evolving tactic that gets around banks increasingly flagging and blocking suspicious electronic transfers.
