BankInfoSecurity.com Interview with Aaron Emigh
• Latest news on the crimeware and phishing fronts
• Why average users can’t always sniff out those phishy emails
• Other cybercrime that financial institutions should be worried about
• Strong authentication - is it helping? What more needs to be done
Aaron Emigh is a well-known expert in information security. He is the author of the U.S. Secret Service San Francisco Electronic Crimes Task Force Report on anti-phishing technology, as well as reports on online identity theft countermeasures and crimeware from the U.S. Department of Homeland Security. Aaron has been involved as a consultant in anti-spam and anti-phishing technologies for several years and has presented security research at numerous conferences and research forums. Most recently, he contributed several chapters to Phishing and Countermeasures, published this year by Wiley Publishing.
LINDA MCGLASSON: Aaron, can you tell our audience what’s the latest news on the crimeware and phishing fronts these days?
AARON EMIGH: I think the biggest thing happening right now is the ongoing transition from purely deception-based attacks, where you get an email that just pretends to be from your bank, to very sophisticated crimeware that provides all kinds of different attack vectors against a user: credentials can be stolen, transactions can be generated, DNS can be hijacked so that you try to go to your bank's site and actually end up somewhere else, even if you're doing everything right as a user, and so on.
On the conventional deception side, we're seeing blacklist-busting URLs. It's a game of Whack-a-Mole: blacklist-based phishing toolbars and so on are being integrated into browsers, so phishers are using a unique subdomain for each email, or for each group of emails, to avoid being put on the blacklist. We're seeing more pharming attacks. We're seeing man-in-the-middle attacks, which will render the tokens used for two-factor authentication significantly less effective, for example. And we're seeing a lot of research on wireless-based attacks, for example, attacks in which a wireless router with a default password can be reprogrammed using malicious JavaScript to point at a rogue DNS server, which would direct you to a site that has nothing to do with your bank instead of to your bank's site. Using JavaScript alone, any user you can lure onto a malicious website can have their DNS compromised to enable pharming. So there are some interesting things happening right now, and there are even scarier attacks on the horizon.
One other thing I'd point to is what a lot of people are calling spear-phishing, which is more targeted phishing attacks. We saw this, for example, in one case where a DSL store was broken into and its customer database was compromised.
Well, once you have not only email addresses but also where those email addresses came from, you can craft a very specific attack. In this case, people were sent emails saying there was a problem with their order, and the attackers were able to show real order information and tell victims to come to a website to input additional information, where they would phish them. Since it was a real order that the victim had placed, that was a very convincing token of credibility, and a lot of people got caught by it. I think we'll see more of these kinds of composition attacks.
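The blacklist-busting trick described above, a unique subdomain minted for each email, can be illustrated with a short sketch. All domain names here are invented for illustration; the point is that an exact-hostname blacklist never matches a freshly generated subdomain, while matching on the registered-domain suffix does:

```python
from urllib.parse import urlparse

# A hostname reported and blacklisted yesterday (hypothetical domain).
blacklist = {"secure-login.example-phish.com"}

def is_blocked(url: str) -> bool:
    """Naive exact-hostname match, as early blacklists worked."""
    return urlparse(url).hostname in blacklist

def is_blocked_by_suffix(url: str) -> bool:
    """Match the registered-domain suffix, defeating per-email subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in blacklist)

# The phisher embeds a unique random subdomain per recipient:
urls = [
    "http://a1b2c3.secure-login.example-phish.com/login",
    "http://d4e5f6.secure-login.example-phish.com/login",
]
print([is_blocked(u) for u in urls])            # exact match misses both
print([is_blocked_by_suffix(u) for u in urls])  # suffix match catches both
```

This is why per-email subdomains turned blacklisting into the Whack-a-Mole game Emigh describes: every message can carry a hostname the blacklist has never seen.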
LINDA MCGLASSON: What are some of the reasons that the average user or reader of emails out there can't always sniff out those phishy emails?
AARON EMIGH: It's an interesting question because it involves a lot of different factors. I think the first thing to say is that users don't really understand the finer points of authentication: of knowing that they're on the right website, or that they're looking at an email from the right party. And I would argue that users should not have to understand those points, and can't be expected to. So I don't think that simply relying on educating users, on the assumption that the problem is dumb users, is going to be successful. I don't think users are dumb; I just think things are not set up for users to understand them easily.
Here's a simple example. In the physical world, if you're looking at a building that says it's a bank, it's pretty easy to tell visually, with pretty decent reliability, whether it really is a bank. Is it a big, gleaming edifice of marble? If so, it probably is. Is it a guy on a street corner with a cardboard box? It probably isn't a bank. One of the problems we have is that the online equivalent of the guy on the corner with a box can copy the bank's website and look exactly the same as the bank, and the only differences are things that are very obscure to the user. They're things like: look for the SSL lock icon in one very particular location in the browser chrome. Well, it turns out users don't know that it matters where the lock icon is. If a lock appears anywhere on the page, it gives them an increased sense of security. If a lock is shown as the favicon, appearing on the left side of the URL instead of the right side, they don't know how to distinguish those things. A lot of our security indicia are not designed for human recognition. We have evolved over a very long period of time to make very sophisticated trust decisions in the offline world, and the online world, and in fact the general technology arena, has so far done a very poor job of helping users figure out what they should trust online.
I think financial institutions compound the problem by employing very poor practices in a lot of their customer communication. I'm talking about things like emails from banks that contain clickable links, where the links are obfuscated: really long links that are hard for a user to understand, which sometimes don't even go to a domain you'd expect for a bank. They go off to some strange domain name that looks a little bit like a phishing domain, or just like one.
Banks often don't even visibly use SSL on the login screen. Hard though it is for users to understand SSL, it's easier if it's used consistently. And oftentimes the login screen itself is not served over SSL, even though the submitted form data with the username and password is, so the user can't see the lock while actually entering the data. Users tend to learn from what they do rather than from what they're told, so they'll learn a lot better from good practices being followed by a financial institution, and from seeing a phishing site deviate from those practices, than they will from just being told by a financial institution what to do. So that's a rundown of some of the reasons the situation is difficult for users.
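The two SSL pitfalls described in the answer above can be sketched as a simple audit: flag a login page that is not itself served over HTTPS (so no lock is visible while the user types), and flag any form whose action would submit credentials over plain HTTP. The page URL, HTML snippet, and function name here are all invented for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class FormFinder(HTMLParser):
    """Collect the action attribute of every <form> on the page."""
    def __init__(self):
        super().__init__()
        self.actions = []
    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.actions.append(dict(attrs).get("action", ""))

def audit_login_page(page_url: str, html: str) -> list:
    warnings = []
    if urlparse(page_url).scheme != "https":
        warnings.append("login page not served over HTTPS; no lock visible while typing")
    finder = FormFinder()
    finder.feed(html)
    for action in finder.actions:
        # Resolve relative actions against the page URL before checking.
        if urlparse(urljoin(page_url, action)).scheme != "https":
            warnings.append("form posts over plain HTTP: %r" % action)
    return warnings

# The form submits over HTTPS, but the page itself is plain HTTP,
# exactly the "user can't see the lock" pattern described above.
page = '<form action="https://login.example-bank.com/auth"><input name="user"></form>'
print(audit_login_page("http://www.example-bank.com/", page))
```

Serving the page from `https://` instead would make the same audit return no warnings, which is the consistent practice the answer argues users actually learn from.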