Holiday Trojans and Botnets to Be Aware Of: Storm 3.0 and the First Android Botnet


Every year around the holiday season, malware writers and botnet operators get busy, attempting to infect new machines and recruit more unsuspecting users' computers into their botnets.

So far this year, there have been two interesting developments:

– Mobile security company Lookout discovered a piece of malware named Geinimi that gets downloaded to your Android smartphone if you use an unofficial marketplace (currently Chinese app marketplaces) to download games such as Monkey Jump 2, Sex Positions, President vs. Aliens, City Defense and Baseball Superstars 2010. This trojan then attempts to connect to several domain names, such as http://www.widifu.com, http://www.udaore.com, http://www.frijd.com, http://www.islpast.com and http://www.piajesj.com, to upload users' private information, including fine-grained location, device identifiers, etc.

While no one has yet seen the affected Android devices receiving commands from the command-and-control server, I strongly suspect that in the near future we will see the first Android phone botnet. With smartphone-based and location-based advertising becoming ever more popular, the first goal could be a new form of click fraud, in which these Android devices are recruited to click on adverts. More details at Lookout's blog.

– Version 3.0 of the Storm botnet is out. It spreads via e-mail, like the earlier versions of Storm. Affected users are prompted to download a fake Flash player, at which point malware gets installed on their machines. These new bot machines then connect to domain names whose hosting IP address is constantly changed via IP fast-flux. Nothing new here; we have seen all of these exploit mechanisms before. Still, users must be careful not to click on suspicious e-mails and, most importantly, not to be tricked into installing fake Flash players. More details at Shadowserver's blog.
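
For readers curious what IP fast-flux looks like from the outside, here is a minimal sketch (my own illustration, not anything from the Shadowserver write-up): repeatedly resolve a suspect domain and flag it if the answers rotate through many distinct IP addresses with very short TTLs. The probe counts, thresholds and the dnspython dependency are all assumptions.

```python
# Crude fast-flux heuristic: many distinct A records plus very low TTLs over a short
# observation window. Thresholds below are guesses, not values from any published study.
import time
import dns.resolver  # third-party: pip install dnspython

def looks_like_fast_flux(domain, probes=5, interval=60, ip_threshold=10, ttl_threshold=300):
    seen_ips = set()
    min_ttl = float("inf")
    for i in range(probes):
        try:
            answer = dns.resolver.resolve(domain, "A")
        except Exception:
            return False  # NXDOMAIN, timeout, etc. -- not enough evidence either way
        seen_ips.update(rr.address for rr in answer)
        min_ttl = min(min_ttl, answer.rrset.ttl)
        if i < probes - 1:
            time.sleep(interval)
    # Many distinct IPs combined with a very short TTL is a crude fast-flux signal.
    return len(seen_ips) >= ip_threshold and min_ttl <= ttl_threshold

if __name__ == "__main__":
    print(looks_like_fast_flux("example.com", probes=2, interval=5))
```

A real fast-flux detector would of course correlate answers across many domains and resolvers rather than probing one name from a single vantage point.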

Stay safe, and enjoy the holidays!


Skype supernode oscillation raises questions


Skype may have built a scalable network, but they still have their work cut out for them in adding fault tolerance, resilience and security to their infrastructure, as their most recent outage, over Christmas Eve, shows. This was their first outage in three years and, unluckily for them, it came on the heels of a planned IPO; I am sure their brand took a hit.

The event is a classic example of all that can go wrong when you rely on user machines for a critical piece of your infrastructure (in Skype's case, the supernodes). Hopefully, Skype and similar services will realize that they are increasingly becoming akin to ISPs and hence must incorporate network monitoring and security solutions into their infrastructure.

The following article from Skype provides a detailed analysis of what went wrong. It reads very much like dominoes toppling one another. First, a few Skype servers got overloaded, which caused some Skype client versions to hang while waiting for a server reply and then crash. Those clients then restarted and attempted to connect to supernodes all at once, at which point the supernodes themselves became overloaded and crashed. The clients then tried to switch to alternate supernodes, which crashed in turn.

Can attackers bring down Skype via DDoS?

This incident exposes an Achilles heel in Skype's infrastructure that a clever adversary could exploit. While this outage was a classic case of the "flash crowd" effect, any flash crowd can be deliberately recreated as a DDoS attack. An adversary only has to figure out which users' machines are supernodes and send them a traffic deluge. Given that Skype does not currently have systems in place to prevent such route flapping (or hysteresis effects), this would easily cause users to be switched to other supernodes, which in turn would be brought down by the same traffic deluge.

What could Skype have done to prevent this outage and future ones?

Network monitoring:

Clearly, Skype needed an analytics and measurement system in place yesterday. This can be as simple as a monitoring daemon that gathers "anonymized" statistics about CPU, network and memory utilization on each Skype client and, more importantly, on each client that has been elevated to supernode status. These statistics should then be aggregated hierarchically, in fact over the same overlay network that Skype uses to route calls (the Global Index). Given that such statistical information is only used for health monitoring, network bandwidth can be saved by aggregating the statistics in a lossy manner, e.g. using Bloom filters. Finally, the statistics should be collected at a central server or database where time-series forecasting techniques can determine whether the aggregate CPU, network or memory utilization of Skype's infrastructure is normal or above normal. Had such a system been in place, it would have alerted Skype well before the problems befell their network.
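
To make that concrete, here is a minimal sketch of the kind of health check described above (my own illustration, not Skype's actual system): per-node utilization samples are aggregated into a single cluster-wide number, and the aggregate is flagged as abnormal when it deviates too far from an exponentially weighted moving average, a very simple form of time-series forecasting. All names, constants and thresholds here are assumptions.

```python
# Minimal EWMA-based anomaly check over aggregated supernode utilization samples.
from dataclasses import dataclass

@dataclass
class EwmaDetector:
    alpha: float = 0.1      # smoothing factor for the forecast
    k: float = 3.0          # how many "sigmas" of deviation counts as abnormal
    mean: float = 0.0
    var: float = 0.0
    initialized: bool = False

    def update(self, value: float) -> bool:
        """Feed one aggregate utilization sample; return True if it looks abnormal."""
        if not self.initialized:
            self.mean, self.initialized = value, True
            return False
        deviation = value - self.mean
        abnormal = self.var > 0 and abs(deviation) > self.k * self.var ** 0.5
        # Update the forecast regardless, so the baseline tracks slow drift.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return abnormal

def aggregate_cpu(samples):
    """Roll per-supernode CPU readings up into one cluster-wide number."""
    return sum(samples) / len(samples)

detector = EwmaDetector()
for minute_samples in [[0.31, 0.28, 0.35], [0.33, 0.30, 0.36], [0.92, 0.95, 0.97]]:
    if detector.update(aggregate_cpu(minute_samples)):
        print("ALERT: aggregate supernode CPU utilization looks abnormal")
```

In practice the aggregation would happen hierarchically over the overlay (and lossily, e.g. via Bloom filters or sketches), with only the rolled-up statistics reaching the central forecasting service.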

Figure 1: BGP route flapping

Learning from BGP route-flap dampening:

Skype's supernode oscillations are evocative of an oscillation problem the networking industry has dealt with in the past: BGP route flapping, route oscillation and route convergence. BGP, or Border Gateway Protocol, is the Internet's premier inter-domain routing protocol, and when a router decides to prefer one route over another, it should not do so without considering the global implications of its decision. For instance, in Figure 1, suppose router R3 advertises to the rest of the Internet that the best way to reach it is via router R2. Now imagine that during an increased traffic onslaught, the link between R2 and R3 goes down due to missed keep-alives on the TCP session established between the two routers. At that point, R3 may be tempted to advertise R1 as the best way to reach it, but that simply shifts the traffic deluge from the link R2-R3 to R1-R3. After some time, it is quite likely that the link R1-R3 also goes down, and R3 switches back to R2. As you can imagine, this see-saw can continue ad infinitum, and that is why techniques like "route flap dampening" were invented. Luckily for Skype, there is a vast literature on route-flap dampening and oscillation prevention that they can learn from.
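
For the curious, here is a minimal sketch of route-flap dampening in the spirit of RFC 2439, shown only to illustrate the mechanism Skype could borrow for supernode selection. The penalty and threshold values mimic common router defaults but are assumptions, not anyone's production code.

```python
# Flap dampening: each flap adds a penalty; the penalty decays exponentially over time;
# the route (or supernode) is suppressed while the penalty sits above a threshold.
import math
import time

class FlapDampener:
    def __init__(self, penalty_per_flap=1000, suppress_limit=2000,
                 reuse_limit=750, half_life=900.0):
        self.penalty_per_flap = penalty_per_flap
        self.suppress_limit = suppress_limit   # stop using the route above this
        self.reuse_limit = reuse_limit         # start using it again below this
        self.half_life = half_life             # seconds for the penalty to halve
        self.penalty = 0.0
        self.suppressed = False
        self.last_update = time.monotonic()

    def _decay(self):
        now = time.monotonic()
        elapsed = now - self.last_update
        self.penalty *= math.exp(-math.log(2) * elapsed / self.half_life)
        self.last_update = now

    def record_flap(self):
        """Call whenever the route (or supernode) goes down or comes back up."""
        self._decay()
        self.penalty += self.penalty_per_flap
        if self.penalty >= self.suppress_limit:
            self.suppressed = True

    def usable(self):
        """True if the route should currently be advertised / the supernode used."""
        self._decay()
        if self.suppressed and self.penalty <= self.reuse_limit:
            self.suppressed = False
        return not self.suppressed
```

The exponential decay prevents a briefly unstable route (or supernode) from being punished forever, while repeated flaps within the half-life quickly push it over the suppress threshold.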


Application-layer DDoS claims its latest victims: WikiLeaks, Mastercard and Visa.


What can the security community do to stop application-layer DDoS attacks, and the ensuing nuisance for legitimate users of a web site, once and for all?

In the latest episode in the saga of the WikiLeaks-inspired cyber war, the websites of Mastercard and Visa were brought down by an application-layer DDoS attack. You can find more details on how the attack was launched here and here.

I didn't find it surprising how easy it was to bring these sites down with fewer than 1,000 user machines whose owners volunteered them as bots. What I did find surprising was that corporations such as Mastercard and Visa do not have sufficient capacity or safeguards in place to protect themselves from such an attack. Even more surprising is that we have known about application-layer DDoS attacks for almost a decade and yet we still hear about such-and-such corporation being brought down. The first incident of an application-layer DDoS attack was the Operation CyberSlam case, in which a TiVo vendor DDoS'ed his competitor's site. More recently, in 2008, CNN.com was brought down by "hacktivists" allegedly from China; all they had to do was keep their browsers open on a page with an iframe that constantly refreshed cnn.com.

Even more surprising is that corporations, enterprise networks and ISPs still don't have an automated approach to detecting and mitigating DDoS attacks. Every time a DDoS attack occurs, we read about how the IT folks at the affected web site had to scramble to change their DNS provider or move to a cloud-based service such as Amazon's. What we really need is an automated approach to handling DDoS.

I wrote my PhD dissertation on this topic half a decade ago, along with two papers (INFOCOM'06, ToN'08): how to automatically detect application-layer DDoS attackers and then throttle them at a reverse proxy server that sits in front of the web server tier.

In a nutshell, my thesis examines the various ways someone can launch an application-layer DDoS attack. An attacker can (1) send requests too fast; (2) launch an asymmetric attack by figuring out the heaviest page or largest image file and fetching it far more often than other pages; or (3) use a personal favorite and the nastiest to detect, which I named the "guerilla attack": open a large number of new HTTP/1.1 sessions, send one request per session and disappear without waiting for the server's response, i.e. leave the session with the Apache server half-open. Performance tests against a benchmark site showed how much damage such attacks can cause: a site that normally serves 100 concurrent users with response times around 0.1 seconds can easily see legitimate users' response times rise to 40 seconds (at which point they will get annoyed and leave) under the most potent attack, which used only 300 clients generating guerilla attacks.

On the detection front, I proposed assigning a suspicion score to each HTTP/1.1 session based on how well behaved it is. A misbehaving session, one that is sending requests faster than normal or generating request distributions that deviate from normal user browsing profiles, is assigned a high suspicion score. A reverse proxy server that tracks CPU load and network utilization in the web cluster then schedules sessions based on their suspicion scores: if your session has a low suspicion score, its requests are given higher priority than those of a session with a high score.
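
Here is a rough, self-contained sketch of that idea: a tracker assigns each session a suspicion score based on how far its request rate exceeds a typical human browsing rate, and a priority queue at the reverse proxy serves low-suspicion requests first. This is an illustration only, not the implementation from the papers; the scoring formula, the assumed normal rate and the queue policy are all simplifications.

```python
# Suspicion-score-based scheduling at a reverse proxy (toy version).
import heapq
import time
from collections import defaultdict

NORMAL_REQ_RATE = 2.0   # assumed "typical" requests/second for a human user

class SessionTracker:
    def __init__(self):
        self.first_seen = defaultdict(time.monotonic)
        self.request_count = defaultdict(int)

    def observe(self, session_id):
        _ = self.first_seen[session_id]        # records the session start on first sight
        self.request_count[session_id] += 1

    def suspicion(self, session_id):
        """Higher score = more suspicious. Here: how far above a normal request rate."""
        elapsed = max(time.monotonic() - self.first_seen[session_id], 1.0)  # >= 1s window
        rate = self.request_count[session_id] / elapsed
        return max(0.0, rate / NORMAL_REQ_RATE - 1.0)

class SuspicionScheduler:
    """Priority queue: requests from low-suspicion sessions are served first."""
    def __init__(self, tracker):
        self.tracker = tracker
        self.queue = []
        self.counter = 0   # tie-breaker keeps FIFO order among equal scores

    def enqueue(self, session_id, request):
        self.tracker.observe(session_id)
        score = self.tracker.suspicion(session_id)
        heapq.heappush(self.queue, (score, self.counter, session_id, request))
        self.counter += 1

    def dequeue(self):
        if not self.queue:
            return None
        _, _, session_id, request = heapq.heappop(self.queue)
        return session_id, request
```

A production version would also fold in the request-distribution and workload-profile features described above, and would age or cap scores so that a briefly bursty legitimate user is not starved.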

Every time I read about a new application-layer DDoS, I am inspired to open-source my reverse proxy code. If any of you reading this are interested in helping out, just holler.


Media coverage of our ACM paper on domain fast-flux botnets


Prominent journalists who routinely cover cyber-security news recently wrote about the techniques we developed to detect botnets that employ domain fast-flux. We presented our paper at ACM IMC 2010 in Melbourne, Australia.

1) “New Technique Spots Sneaky Botnets”, by Kelly Jackson Higgins, at DarkReading

2) “Boffins devise early-warning bot spotter: Conficker’s Achilles Heel”, by Dan Goodin, The Register

Happy reading!

 


Zero-day Detection of Domain-Flux Botnets


Ever wondered why botnets such as Conficker generate domain names that look like gibberish, i.e. strings whose vowels and consonants do not fit together the way they do in any natural language? Despite the massive sophistication exhibited by Conficker, it left behind exactly one Achilles heel, and I recently helped develop a method that exploits it to detect Conficker. Our method is "zero-day" in that it should be able to detect even future "domain-fluxing" botnets automatically, by exploiting that one flaw in their design.

Recently, several botnets such as Conficker, Kraken and Torpig have used DNS-based "domain fluxing" for command-and-control: each bot queries for the existence of a series of domain names, of which the botnet owner has to register only one. Unfortunately, in each case, someone first had to reverse-engineer the bot executable and determine the sequence of domain strings it would generate every day. As one can imagine, this process is time- and resource-intensive, and too much valuable time may pass before the defenders learn all the domain names a botnet will register and race ahead to register them first. We thought hard about this problem: how can we build a first-alarm system that looks at all DNS traffic and provides instantaneous feedback on whether domain-flux activity is present in it?

After several months of work, we presented a paper on our methodology at the ACM Internet Measurement Conference on Nov. 1, 2010 in Melbourne, Australia, describing how to detect such domain-flux botnets in real time, i.e. in a zero-day fashion.

The methodology is based on an interesting observation. Whoever has tried registering a domain name knows how hard it can be to find one that is not already taken, and that is the very reason botnet developers had little choice but to generate unpronounceable names. For instance, Conficker bots generate names such as joftvvtvmx.org, gcvwknnxz.biz and vddxnvzqjks.ws, while Kraken bots generate domains such as bltjhzqp.dyndns.org, ejfjyd.mooo.com and mnkzof.dyndns.org. The difference from human-registered names is striking: these domain names are absolutely unpronounceable. Botnet owners have to ensure that whatever name they choose to register is actually available, and given that almost all pronounceable domain names are taken nowadays, they were left with no choice but to generate names randomly. In most cases, they did not even bother to generate strings with a distribution of vowels and consonants similar to that of normal Latin-language words. This forms the hypothesis behind our domain-flux detection: we look at the distribution of characters in a domain name to infer whether it is good or bad.
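
To make the intuition concrete, here is a toy sketch in the spirit of the paper's unigram KL metric. It is not the actual code: the paper also uses bigram distributions, edit distance and the Jaccard measure, and it groups domains by the set of IP addresses they map to. The English letter frequencies and the hand-picked groups below are assumptions chosen purely for illustration.

```python
# Compare the letter distribution of a group of domain labels against an English-like
# baseline using KL divergence; algorithmically generated groups diverge far more.
import math
from collections import Counter

# Approximate English letter frequencies (a..z), normalized.
ENGLISH_FREQ = {
    'a': .082, 'b': .015, 'c': .028, 'd': .043, 'e': .127, 'f': .022, 'g': .020,
    'h': .061, 'i': .070, 'j': .002, 'k': .008, 'l': .040, 'm': .024, 'n': .067,
    'o': .075, 'p': .019, 'q': .001, 'r': .060, 's': .063, 't': .091, 'u': .028,
    'v': .010, 'w': .024, 'x': .002, 'y': .020, 'z': .001,
}

def group_distribution(labels):
    """Letter distribution over a whole group of domain labels (the paper groups
    domains that map to the same IP addresses; here the grouping is given by hand)."""
    letters = [c for label in labels for c in label.lower() if c.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return {c: counts.get(c, 0) / total for c in ENGLISH_FREQ}

def kl_divergence(p, q, eps=1e-6):
    """D(p || q); eps smooths zero probabilities so the log is always defined."""
    return sum((p[c] + eps) * math.log((p[c] + eps) / (q[c] + eps)) for c in q)

legit = ["google", "facebook", "wikipedia", "youtube", "twitter"]
fluxed = ["joftvvtvmx", "gcvwknnxz", "vddxnvzqjks", "bltjhzqp", "mnkzof"]
for name, group in [("legitimate", legit), ("algorithmic", fluxed)]:
    print(name, round(kl_divergence(group_distribution(group), ENGLISH_FREQ), 3))
```

Run over the two groups, the divergence for the algorithmically generated names should come out noticeably larger than for the human-chosen ones, which is exactly the signal a detector can threshold on.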

Title: Detecting Algorithmically Generated Malicious Domain Names
Authors: Sandeep Yadav, Ashwath Reddy, A.L. Narasimha Reddy, Supranamaya Ranjan
Presented at ACM Internet Measurement Conference 2010
Abstract:
Recent Botnets such as Conficker, Kraken and Torpig have used DNS based “domain fluxing” for command-and-control, where each Bot queries for existence of a series of domain names and the owner has to register only one such domain name. In this paper, we develop a methodology to detect such “domain fluxes” in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically, in contrast to those generated by humans. In particular, we look at distribution of alphanumeric characters as well as bigrams in all domains that are mapped to the same set of IP-addresses. We present and compare the performance of several distance metrics, including KL-distance, Edit distance and Jaccard measure. We train by using a good data set of domains obtained via a crawl of domains mapped to all IPv4 address space and modeling bad data sets based on behaviors seen so far and expected. We also apply our methodology to packet traces collected at a Tier-1 ISP and show we can automatically detect domain fluxing as used by Conficker botnet with minimal false positives.
