Tuesday, January 15, 2013

Logging YouTube Titles with Bro


1 It's harder to know what and when to write than to know how to write

It's not uncommon for the hardest part of a Bro script to be the initial idea. Bro is well documented, well organized and logically laid out - aspects that make Bro scripting easier to learn. Unfortunately, it's not easy to figure out just when a new script is warranted. I love using Bro! I love getting into discussions about it and introducing NSM practitioners to it. I love when it dawns on people that Bro is less of an application and more of, as I recently heard Seth Hall refer to it, "a platform". However, at times I find myself having to step back and ask myself "Wait, is this a job for Bro?" Usually, the decision comes down not to "can Bro do this?" but to "will doing this in Bro cause a performance problem?" As I've pointed out in the past, if you try to push Bro out of its connection-oriented safe zone and start using per-packet event handlers such as new_packet, you're going to bring your Bro workers to a stuttering stop in short order!
So, when someone asked that we include LuaJIT capabilities when we build the latest Suricata for SecurityOnion so they could write a script to log YouTube titles, I started poking around Bro and wondering if this was a "good idea". Good idea or not, I realized it was a key learning opportunity, and as in my previous posts, I'll try to walk through the life cycle of the script from testing and exploration to workable code.

2 Have tracefile will travel

I like to start with just a tracefile, one event and Bro. If you've followed this blog's posts about Bro, you'll notice that instead of starting with tshark, my workflow has shifted to starting with Bro itself. I took a moment to generate a tracefile on my laptop while I browsed through a couple of YouTube clips. Here, I have to mention a caveat: if you capture traffic on a machine that has TCP checksum offloading enabled (as most major OSes do), Bro will squawk at you! In fact, if you're running the version of Bro from their git repo there's even a script that will tell you! This script isn't in the current 2.1 release, but should be in 2.2.
bro -r youtube-browse.trace
WARNING: 1357934820.121088 Your trace file likely has invalid IP checksums, most likely from NIC checksum offloading. (/Users/Macphisto/Documents/src/bro/scripts/base/misc/find-checksum-offloading.bro, line 42)
For starters, take a moment and marvel that Bro includes a script that tells you when checksum offloading is in use! Okay, enough marveling! Back into the packet mines! To get Bro to parse the pcap w/out complaint, give it the -C flag when you run it on the command line. When we run the packet trace through Bro with the default settings, we get our common and well loved .log outputs. For the tracefile I'm using, my http.log file runs approximately 175 lines. If we want to strip out some of the chaff since we're only interested in the titles of individual videos, we can employ some bro-cut and awk to search for any URI field that starts with "/watch?v=".
bro-cut -d ts host uri  < http.log | awk '{if ($3 ~ /^\/watch\?v=/) print $0}'  
2013-01-11T15:07:03-0500    www.youtube.com /watch?v=p3Te_a-AGqM
2013-01-11T15:07:17-0500    www.youtube.com /watch?v=5axK-VUKJnk
2013-01-11T15:07:25-0500    www.youtube.com /watch?v=Zxt-c_N82_w
2013-01-11T15:07:29-0500    www.youtube.com /watch?v=Dgcx5blog6s
2013-01-11T15:07:33-0500    www.youtube.com /watch?v=zI4KfUPRU5s
So we know our pcap has the kind of traffic we want to work with and we know we're looking at five videos viewed, so our logfile should include five entries. If we were to download each page, we'd be able to pull the title of the video from the HTML title tags in the document's source. We've got input, a desired output, and a decent guess at how to accomplish what we want. Time to start playing with events and seeing if we can get some valid output.
At this point, I start using emacs and bro-mode's bro-event-query to search for keywords in event definitions. You can do the same w/ grep and the events.bif.bro file or by perusing the online documentation at www.bro-ids.org/documentation if you are a member of the unwashed masses who don't adore emacs. I try to pick keywords related to the function of the script I'm working on. Since we are working with the HTTP protocol, the obvious query to try first is simply "http".
global http_proxy_signature_found: event(c: connection);
global http_signature_found: event(c: connection);
global http_stats: event(c: connection, stats: http_stats_rec);
global http_event: event(c: connection, event_type: string, detail: string);
global http_message_done: event(c: connection, is_orig: bool, stat: http_message_stat) &group="http-body";
global http_content_type: event(c: connection, is_orig: bool, ty: string, subty: string) &group="http-body";
global http_entity_data: event(c: connection, is_orig: bool, length: count, data: string) &group="http-body";
global http_end_entity: event(c: connection, is_orig: bool) &group="http-body";
global http_begin_entity: event(c: connection, is_orig: bool) &group="http-body";
global http_all_headers: event(c: connection, is_orig: bool, hlist: mime_header_list) &group="http-header";
global http_header: event(c: connection, is_orig: bool, name: string, value: string) &group="http-header";
global http_reply: event(c: connection, version: string, code: count, reason: string) &group="http-reply";
global http_request: event(c: connection, method: string, original_URI: string, unescaped_URI: string, version: string) &group="http-request";
global gnutella_http_notify: event(c: connection);
Bro has a lot of great http events and we could probably spend an inordinate amount of time simply playing with each event handler, but let's jump right to the most likely suspect and look at what we can get out of http_entity_data. First let's check out its inline documentation. Again, here I use bro-mode, feel free to use your method of choice!
## Generated when parsing an HTTP body entity, passing on the data. This event
## can potentially be raised many times for each entity, each time passing a
## chunk of the data of not further defined size.
##
## A common idiom for using this event is to first *reassemble* the data
## at the scripting layer by concatenating it to a successively growing
## string; and only perform further content analysis once the corresponding
## :bro:id:`http_end_entity` event has been raised. Note, however, that doing so
## can be quite expensive for HTTP transfers. At the very least, one should
## impose an upper size limit on how much data is being buffered.
##
## See `Wikipedia <http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol>`__
## for more information about the HTTP protocol.
##
## c: The connection.
##
## is_orig: True if the entity was sent by the originator of the TCP
##          connection.
##
## length: The length of *data*.
##
## data: One chunk of raw entity data.
##
## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity
##    http_event http_header http_message_done http_reply http_request http_stats
##    mime_entity_data http_entity_data_delivery_size skip_http_data
Here's a point where we have to start asking ourselves if what we're doing is reasonable. Anytime you run into a warning in the inline docs, you really do want to take it seriously! The Bro developers know their stuff; trust their advice! With the File Analysis Framework due out in version 2.2, considerations like this may change but for now, tread carefully. Turns out we can get access to the actual HTTP stream with http_entity_data, but we need to take care that we don't start filling up data structures with the entire stream lest we overload our Bro workers. What we need to do is find the information we want and then stop processing that stream!
Let's play with this event handler and see if it passes muster for what we want. The http_entity_data event handler will break the incoming data into multiple chunks and handle any decoding of the data that's necessary (e.g. gzip). The event handler below will print out the unique identifier of the connection being processed.
event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
  {
  print c$uid;
  }

When run against the pcap I'm using, I get 15,046 lines of output. If we pipe that output through sort | uniq -c | sort -n we get the following.
   1 Hx2s491udkc
   1 OLADCARHdKe
   1 qXn7aoOZIY3
   1 vZF2AuFEO6l
   1 yFNAPFLjO0i
   2 2bXodAWEk0j
   2 DanqmVQzII6
   2 L1NSH9eF6t1
   2 jptSnemNKpl
   3 oqqGY7L2bv3
   4 beBpcNoLnge
   4 sWHlVfnoXRi
   4 ws8K4s9Cmxg
   5 hSl5nnrNA61
   8 R7PLlFkOX7g
   8 cq9sHuip6Qg
  11 Z4Kyigf5Ltk
  14 G46tNkORn89
  17 KYQwK0W7dab
  18 HOGkTeMZBqg
  34 MELk1DePbz4
  35 ZMKcbTWNZQ1
  41 1Gqs5N1xCCj
  42 8rcIgZOIrld
  42 R5qsP8DqfXe
 109 cWKGISIiNW4
 119 X3MHfBQNXIk
 338 solSn9d4peh
 587 xQ63tbCUj92
 942 xeMa2JrSvV8
1171 yGLLPuNeH1l
1639 7bMjnKIFyVj
1639 pIzbIVYHIT
1640 56QrlAd2szc
1640 M3BuzAh4Vya
1640 fC0dBlx8Mc3
3279 NxvKRXnQPf6
There's a rather large number of unique connections in this trace, some of which have just one chunk of data and others which have thousands. Let's see if we can replicate the kind of information we got from our http.log file with bro-cut. The major pieces of information we wanted were the host and the URI; we were, effectively, printing out the workable URL for the video.
event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
   {
   if ( c$http$method == "GET"  && /\.youtube\.com$/ in c$http$host && /^\/watch\?v=/ in c$http$uri )
       {
       print fmt("%s%s", c$http$host, c$http$uri);
       }
   }
The event handler above does nothing but print the host and the uri if three conditions are met. When constructing conditionals with multiple conditions in Bro, as in most programming languages, it's best to construct them such that Bro bails out at the point that is most computationally inexpensive. This is commonly called "short-circuit evaluation". Think of it as whittling down your data in chunks such that each cut is successively more difficult to perform. It's best to know whether the piece will fail early in the process before committing to each difficult cut. In this example, we're checking first for the appropriate HTTP method being used, "GET" in our case. If that condition is met, we move on to a regular expression (regexp) checking whether the host field ends in ".youtube.com". With this condition, our event will bail out if the data being processed is not from YouTube, making it such that all other sites won't consume any extra memory or processor cycles. The third condition uses a regexp again to check that the URI starts with a '/' followed by "watch?v=". Running this script against my tracefile again produces more than 14,000 lines of data, so piping through sort | uniq -c | sort -n we get the following.
Macphisto@Lictor test-bro-youtube % bro -C -r ~/tracefiles/youtube-browse.trace /tmp/iterations_youtube.bro | sort | uniq -c | sort -n
 104 www.youtube.com/watch?v=Zxt-c_N82_w
 107 www.youtube.com/watch?v=zI4KfUPRU5s
 109 www.youtube.com/watch?v=Dgcx5blog6s
 118 www.youtube.com/watch?v=5axK-VUKJnk
 121 www.youtube.com/watch?v=p3Te_a-AGqM
Lacking the time stamp, that is surprisingly close to the output we got from using bro-cut on http.log. We effectively have output of the form "number of chunks of data processed" followed by the "effective YouTube URL". If you notice that there are quite a lot of chunks processed for each URL, you're right and it brings up a challenge. We will need to keep some sort of state on these URLs. The simplest way to do so would be to use a global variable. A globally scoped variable is accessible in any part of Bro once it is defined. In this case, we're going to use a table. If you are familiar with other scripting languages, a table in Bro should hold no surprises for you. If tables are new to you, they, in short, associate a value with an index or key.
Tables in Bro are declared with the format below.
SCOPE table_name: table[TYPE] of TYPE;
So, a locally scoped table of ip addresses associated with their hostnames would be declared as:
local ip_to_host: table[addr] of string;
and populated with:
local ip_to_host: table[addr] of string;
ip_to_host[8.8.8.8] = "google-public-dns-a.google.com";
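Membership tests and removal work the same way; here's a minimal standalone sketch (wrapped in bro_init() so it will actually run, with a made-up entry):
event bro_init()
    {
    local ip_to_host: table[addr] of string;
    ip_to_host[8.8.8.8] = "google-public-dns-a.google.com";

    # Look up an entry only if the key exists, then remove it when we're done.
    if ( 8.8.8.8 in ip_to_host )
        print ip_to_host[8.8.8.8];

    delete ip_to_host[8.8.8.8];
    }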
In our script we'll use a globally scoped table indexed by the connection's uid to hold the chunk or chunks of data of each connection. To test that our idea will work the way we expect, we'll run a test script against our tracefile.
global title_table: table[string] of string;

event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
      {
      if ( is_orig )
          {
          return;
          }
      
      if ( /\.youtube\.com$/ in c$http$host && /^\/watch/ in c$http$uri )
          {
          if ( c$uid !in title_table )
              {
              title_table[c$uid] = sub_bytes(data, 0, 15);
              }
          }
      }
      
event bro_done()
    {
    print title_table;
    } 
In the script above, we define our globally scoped table of strings indexed by strings. We then use the http_entity_data event handler to process each chunk of HTTP data. Once the event fires, we check if this chunk was sent by the originator of the TCP connection (i.e. my browser); if so, we bail out of our function. If it's from the server, we use the same set of regular expressions to check that the host is youtube.com and the uri is a valid video. If both of those conditions pass, we check if there is currently an element of our table that is indexed by the unique connection ID we are currently processing. In this case, we have to watch for the absence of c$uid in title_table by using the negated "in" operator like this: "c$uid !in title_table". If we have yet to see any data from this connection ID, we save the first 15 characters of the stream to the table. If there already exists information for that connection ID, processing of the event completes. When Bro is finished processing, we print the contents of the title_table data structure. As you can see, we receive the proper DOCTYPE tag of the web pages!
{
[LxYAojPggeg] = <!DOCTYPE html>,
[Cct4cQlgsNh] = <!DOCTYPE html>,
[GwEa2HAfAta] = <!DOCTYPE html>
}
We now know our theory works in practice, so let's extend it to check for the html title tag. We should be able to build up a big enough cache of bytes from the HTTP stream in our table to then check for the html title tag for each connection.
global title_table: table[string] of string;

event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
    {
    if ( is_orig )
        {
        return;
        }
            
    if ( /\.youtube\.com$/ in c$http$host && /^\/watch/ in c$http$uri )
        {
        if ( c$uid !in title_table )
            {
            title_table[c$uid] = data;
            }
        else if ( |title_table[c$uid]| < 2000 )
            {
            title_table[c$uid] = cat(title_table[c$uid], data);
            }
        }
    }


event bro_done()
    {

    for (i in title_table)
        {
        if ( /\<title\>/ in title_table[i] )
            {
            local temp: table[count] of string;
            temp = split(title_table[i], /\<\/?title\>/);
            if ( 2 in temp )
                {
                print temp[2];
                }
            }
        }
    } 
In the script above, we do much of the same as the previous script but we're adding in some logic to make sure we don't overtax our Bro workers. Once we check if there's already a chunk of data indexed by the current unique connection ID, we also check the byte length of that data using the length operator, surrounding pipes (|). If the byte length of that data is less than 2000 bytes, we concatenate the current data chunk with the data already in the table. In my entirely non-scientific study of YouTube streams, I've found the HTML title tag to appear within the first 2000 bytes. Once Bro is finished processing, we then use the bro_done() event and process the title_table table.
When given a table, a for loop will return the indexes of the table, one at a time, in the temporary variable supplied. So in this example, we are iterating over title_table and storing each index, in turn, in the variable 'i'. Once inside the for loop, we check if there is an HTML title tag in title_table[i] and if there is, we start to use the split function. The split function operates on a string and a regular expression and returns a table of strings indexed by an unsigned integer. When split finds the regular expression, it places everything before it in index 1 and everything after it in index 2, incrementing and repeating the process for each hit on the regular expression. As such, we split on the opening or closing title tag in title_table[i] and store the resulting table in temp.
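To make that indexing concrete, here's a tiny standalone sketch of split() and the length operator against a hypothetical page fragment (the string and title are made up):
event bro_init()
    {
    local page = "<head><title>Emacs Rocks! - YouTube</title></head>";

    # |...| gives the byte length of the string.
    print |page|;

    # Everything before the first <title> lands in index 1, the title text in
    # index 2, and the remainder after </title> in index 3.
    local parts = split(page, /\<\/?title\>/);
    if ( 2 in parts )
        print parts[2];
    }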
Running the script against the tracefile I'm using, I get the following output.
Macphisto@Lictor /tmp % bro -C -r ~/tracefiles/youtube-browse.trace ~/Documents/Writing/Blog/Logging_Youtube_With_Bro/test_youtube_v1.bro
Extending Emacs Rocks! Episode 01 - YouTube
Emacs Rocks! Live at WebRebels - YouTube
Extending Emacs Rocks! Episode 04 - YouTube 
Those are the titles of the videos I was browsing. Yes, I watch videos about Emacs and so should you! Magnars from Emacs Rocks is brilliant! But there's a problem. If you remember the output from bro-cut there were more GET requests, five to be exact. So what's happening here? Well, it comes down to how the HTTP protocol works. An HTTP connection doesn't contain just one GET/POST/etc. and a reply. It can, in fact, contain many. When I was browsing while generating my tracefile, I wasn't watching each video to the end (I've watched them many times!) before opening a new one; I would let one play for a while and then click on one of the suggested Emacs Rocks videos. I might have even opened a couple more in other browser tabs. So, one of the sessions has multiple GET requests in it. If I rerun bro-cut and include the uid, I get the following output from awk.
Macphisto@Lictor /tmp % bro-cut -d ts uid host uri  < http.log | awk '{if ($4 ~ /^\/watch\?v=/) print $0}'
2013-01-11T15:07:03-0500    XuUszZPoVtl www.youtube.com /watch?v=p3Te_a-AGqM
2013-01-11T15:07:17-0500    cT4R1CynIka www.youtube.com /watch?v=5axK-VUKJnk
2013-01-11T15:07:25-0500    XuUszZPoVtl www.youtube.com /watch?v=Zxt-c_N82_w
2013-01-11T15:07:29-0500    XuUszZPoVtl www.youtube.com /watch?v=Dgcx5blog6s
2013-01-11T15:07:33-0500    rX2DqKrjQCi www.youtube.com /watch?v=zI4KfUPRU5s 
There you have it. One connection, XuUszZPoVtl, issued three GET requests. This presents a significant problem. The idea was that we would only inspect the first 2000 bytes of our stream and then bail out so as to not overload our workers. If we can't guarantee that the HTML title tag falls within the first 2000 bytes with our current setup, we're going to have to monitor the entire stream and that could add extraneous load to our Bro workers. So, back to the drawing board. We had a good idea, it just needs some… finesse!
Since we know that Bro detects multiple GETs, we can try to use that as a toggle for our extraction of the HTML title tag. In fact, we're even going to change the data structure we use to keep state for our script. In testing, I'm almost certain that the HTML title tag is going to be in the first chunk of data returned after a GET request, so there's no need to store the data and keep concatenating it. Instead we'll use a set to store the unique IDs. A set in Bro is a list of unique entities. The declaration of a set is similar to how we defined the table in our previous example.
In this case we'll use a set of strings, which we'll declare with:
global title_set: set[string];
Elements of a set are managed through the use of the add and delete keywords. In our new script, we'll keep an eye out for a GET request meeting the requirements of our YouTube links and then add that unique connection ID to our set. We'll then let http_entity_data check for the existence of that connection ID, pull our title from the first chunk of data, and then delete the entry from our globally scoped set. This way, if there is more than one GET request in an HTTP stream, our parsing of that data will be toggled on and off at the appropriate times, freeing us from having to process any more of the HTTP stream than is necessary.
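Before wiring that into the event handlers, here's a minimal standalone sketch of managing set membership with add, in, and delete (the uid string is made up for illustration):
global title_set: set[string];

event bro_init()
    {
    # A hypothetical connection uid we decided to track after seeing its GET...
    add title_set["UWkXGZps4Mb"];

    if ( "UWkXGZps4Mb" in title_set )
        print "tracking this connection";

    # ...and stop tracking once we've pulled the title from the response.
    delete title_set["UWkXGZps4Mb"];
    }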
global title_set: set[string];

event http_reply(c: connection, version: string, code: count, reason: string)
    {
    if ( c$http$method == "GET" && /\.youtube\.com$/ in c$http$host && /^\/watch\?v=/ in c$http$uri )
        {
        add title_set[c$uid];
        }
    }
    

event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
    {
    if ( is_orig )
        {
        return;
        }

    if ( c$uid in title_set )
        {
                
        if ( /\<title\>/ in data && /\<\/title\>/ in data )
            {
            local temp: table[count] of string;
            temp = split(data, /\<\/?title\>/);
            if ( 2 in temp )
                {
                print fmt("%s - %s %s: %s", c$http$method, c$http$host, c$http$uri, temp[2]);
                }
            delete title_set[c$uid];
            }
        }
    }
The new script uses the same set of splits and prints the output if it finds the opening and closing HTML title tags. Running this script against the test packet trace produces the output we would expect.
Macphisto@Lictor /tmp % bro -C -r ~/tracefiles/youtube-browse.trace ~/Documents/Writing/Blog/Logging_Youtube_With_Bro/test_youtube_v2.bro
GET - www.youtube.com /watch?v=p3Te_a-AGqM: Emacs Rocks! Live at WebRebels - YouTube
GET - www.youtube.com /watch?v=5axK-VUKJnk: Extending Emacs Rocks! Episode 01 - YouTube
GET - www.youtube.com /watch?v=Zxt-c_N82_w: Extending Emacs Rocks! Episode 02 - YouTube
GET - www.youtube.com /watch?v=Dgcx5blog6s: Extending Emacs Rocks! Episode 03 - YouTube
GET - www.youtube.com /watch?v=zI4KfUPRU5s: Extending Emacs Rocks! Episode 04 - YouTube
Output is nice, but Bro wouldn't be Bro if it weren't for logs, and in its current state, this script isn't deployable. The logs must flow, and to make that happen we need the logging framework, which requires some scaffolding in our script. For starters, we should give our script a namespace so as to play well with the community, such as simply "YouTube"; to do this, at the top of our script we just add "module YouTube;". We'll also need to export some information from our namespace to make it available outside of the namespace, namely we need to add a value to the Log::ID enumerable and add a YouTube::Info record data type.
export {
    # The fully resolved name for this will be YouTube::LOG
    redef enum Log::ID += { LOG };

    type Info: record {
        ts:    time    &log;
        uid:   string  &log;
        id:    conn_id &log;
        host:  string  &log;
        uri:   string  &log;
        title: string  &log;
        };
}
Adding YouTube::LOG to the Log::ID enumerable is pretty much just boilerplate code. You'll see "redef enum Log::ID += { LOG };" in just about every single script that produces a log. The YouTube::Info record defines the information we want to log. Any entry in this data type with the &log attribute is written to the log file when Log::write() is called. Now, instead of printing our information to stdout, we call Log::write() with the appropriate record and the logging framework takes care of the rest.
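One piece of scaffolding the prose glosses over: a stream has to be created before the first Log::write(), which the final script below does in bro_init(). That piece, pulled out on its own:
event bro_init() &priority=5
    {
    # Register the stream; the &log fields of the Info record become the
    # columns of the resulting log file.
    Log::create_stream(YouTube::LOG, [$columns=Info]);
    }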
Our final script is below.
module YouTube;

export {
    # The fully resolved name for this will be YouTube::LOG
    redef enum Log::ID += { LOG };

    type Info: record {
        ts:    time    &log;
        uid:   string  &log;
        id:    conn_id &log;
        host:  string  &log;
        uri:   string  &log;
        title: string  &log;
        };
}

global title_set: set[string];

event bro_init() &priority=5
    {
    Log::create_stream(YouTube::LOG, [$columns=Info]);
    }

event http_reply(c: connection, version: string, code: count, reason: string)
    {
    if ( c$http$method == "GET" && /\.youtube\.com$/ in c$http$host && /^\/watch\?v=/ in c$http$uri )
        {
        add title_set[c$uid];
        }
    }

event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
    {
    if ( is_orig )
        {
        return;
        }

    if ( c$uid in title_set )
        {
        if ( /\<title\>/ in data && /\<\/title\>/ in data )
            {
            local temp: table[count] of string;
            temp = split(data, /\<\/?title\>/);
            if ( 2 in temp )
                {
                local log_rec: YouTube::Info = [$ts=network_time(), $uid=c$uid, $id=c$id, $host=c$http$host, $uri=c$http$uri, $title=temp[2]];
                Log::write(YouTube::LOG, log_rec);
                delete title_set[c$uid];
                }
            }
        }
    }
Feel free to pull down the different versions of this script we've worked through from my broselytize github repository, generate a tracefile of some YouTube traffic, and tinker to your heart's delight!

Date: 2013-01-15 16:06:50 EST
Author: Scott Runnels

Friday, May 4, 2012

Learning the Bro Scripting Language Part 3 :: Practical Uses

Learning the Bro Scripting Language :: Practical Uses Part 1

1 The short road to practicality

In the previous two blog posts, I covered some basic uses of the Bro scripting language by using it to solve large parts of a network forensics challenge. If you've read those previous posts, you'll remember that Bro scripting is an event-driven language, meaning that Bro generates events based on the network traffic it observes and the scripting language can be used to apply logical processing to those events. As we saw in the previous post, Bro generates a ton of events and wading through those events to find the appropriate one is a trial that you can overcome through experience and maybe some help from grep!
Much of what I covered in the first two posts didn't have much practical application as it was intended to illustrate what you can do with Bro scripting. We were working off a trace file and a set of questions and we worked to generate a report for those questions. If there's one thing I've learned from talking to Seth Hall (one of the Bro developers and @remor on Twitter) it's that a solution has to be deployable across an enterprise to be worth your time. If your detection method includes "Open up Wireshark and load the pcap", that is detection after the fact and you should find a way to make that action automated. Wireshark has its place and it does the job it was designed for very well; however, it can't be deployed across a large scale production environment like Bro can. As I've done in past posts, I still start with tshark to help me identify the behavior I'm interested in and then pivot to Bro scripts to deploy it at scale. With Bro, we want to leverage the scripting language to be able to define activities of interest and report on them.

2 Detecting web sites that use basic auth

There are still a good number of sites using Basic Access Authentication and it's likely not something you'd like to see running as a service on your network. If you've not toyed with Basic Access Authentication, it's effectively just a way to make sure that non-HTTP-compatible characters can be transmitted by using Base64 encoding. When a site uses basic auth, it sends the username and password as a colon separated string that has been Base64 encoded. While confidentiality of the username and password is not the primary intent of basic auth, I wouldn't be surprised to find a number of web developers who consider it 'secure enough' because they don't think someone is listening. To illustrate the point about Base64, here are two short snippets of Ruby: one that encodes a username and password the same way basic auth does, and a second that decodes the Base64 string.
require 'base64'
username = 'srunnels'
password = 'recursivehoff'
p Base64.encode64("#{username}:#{password}")
require 'base64'
base64_string = 'c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg=='
p Base64.decode64(base64_string)

3 How it looks on the line

Since I didn't particularly want to futz with someone else's web server, I stood up a VM with Apache2 and applied basic auth to the default site. This way I'll have something I can regularly make requests to as well as something I can use to generate a full trace file without possible exposure. With the server up and a trace file being generated, we were able to capture some traffic to the site. As usual, I tend to use tshark as my initial tool, so let's find out what we're actually trying to detect.
mac@lubuntu-VM:~$ tshark -r tracefiles/20120503120402.lpc -R "http contains Auth" -O http -V | awk '/Authorization: Basic/ {print}'
    Authorization: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==\r\n
    Authorization: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==\r\n
    Authorization: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==\r\n
    Authorization: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==\r\n
The Base64 encoded string is sent to the server as part of the HTTP header, which means, to start, we're going to look for any Bro events that correspond to HTTP and HTTP headers. Some easy grepping through the base scripts from Bro leads us to two events, http_header() and http_all_headers(), the difference between the two being that http_header generates an event for every header while http_all_headers generates a list of headers per request or response. For now we're going to work with http_header and see if it can detect a session using basic auth.
event http_header(c: connection, is_orig: bool, name: string, value: string)
      {
      if (/AUTHORIZATION/ in name && /Basic/ in value)
         print fmt("%s: %s", name, value);
      }
AUTHORIZATION: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==
AUTHORIZATION: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==
AUTHORIZATION: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==
AUTHORIZATION: Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==
Three lines of Bro's scripting language and we can detect a server using Basic Access Authentication! Now, all that's left is to make Bro understand that we care about this kind of behavior!
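As an aside, the same check could be written against http_all_headers, which hands us every header for a request or response at once; a sketch we won't use going forward, assuming the standard name/value fields of mime_header_list:
event http_all_headers(c: connection, is_orig: bool, hlist: mime_header_list)
    {
    # Walk the header list and look for the same Authorization: Basic pattern.
    for ( i in hlist )
        {
        if ( /AUTHORIZATION/ in hlist[i]$name && /Basic/ in hlist[i]$value )
            print fmt("%s: %s", hlist[i]$name, hlist[i]$value);
        }
    }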

4 Generating notices

Bro's Notice Framework is expansive to say the least. I found that, again, the best resource is the set of scripts that ship with Bro by default. Running a recursive grep for 'notice' in /usr/local/bro/share/ returned 490 lines, and taking a look through them, a couple of entries stood out as being of possible importance. One of the common entries was "redef enum Notice::Type += {". If you're unfamiliar with the += operator, it's an operator that allows us to add onto an already defined variable. In this case we're adding a value to the enumerable constant Notice::Type. The documentation for Notice::Type lets us know that "Scripts creating new notices need to redef this enum to add their own specific notice types which would then get used when they call the NOTICE function." So, in our case we might extend this constant to include "HTTP::Basic_Auth_Server".
module HTTP;
export {
  redef enum Notice::Type += {
    ## Generated if a site is detected using Basic Access Authentication
    HTTP::Basic_Auth_Server
  };
}

The other entry that stood out from poking around the default scripts was NOTICE(). NOTICE() takes one argument, the Notice::Info record, but it's a whopper. You can pass a rather massive amount of information into NOTICE() via the Notice::Info record but the only required argument to pass in is the Notice::Type. If we just wanted to generate a notice, albeit a somewhat unhelpful one, we could pass it just the Notice::Type we added.
module HTTP;

export {
  redef enum Notice::Type += {
    ## Generated if a site is detected using Basic Access Authentication
    HTTP::Basic_Auth_Server
  };
}


event http_header(c: connection, is_orig: bool, name: string, value: string)
      {
      if (/AUTHORIZATION/ in name && /Basic/ in value)
         {
         NOTICE([$note=HTTP::Basic_Auth_Server]);
         }
      }
When we run the script against the tracefile, we get a notice.log in the current working directory.
#separator \x09
#set_separator  ,
#empty_field    (empty)
#unset_field    -
#path   notice
#fields ts      uid     id.orig_h       id.orig_p       id.resp_h       id.resp_p       proto   note    msg     sub     src     dst     p       n       peer_descr      actions policy_items    suppress_for    dropped remote_location.countr
#types  time    string  addr    port    addr    port    enum    enum    string  string  addr    addr    port    count   string  table[enum]     table[count]    interval        bool    string  string  string  double  double  addr    string
1336061141.701690       -       -       -       -       -       -       HTTP::Basic_Auth_Server        -       -       -       -       -       -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -
1336061141.914860       -       -       -       -       -       -       HTTP::Basic_Auth_Server        -       -       -       -       -       -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -
1336061141.918352       -       -       -       -       -       -       HTTP::Basic_Auth_Server        -       -       -       -       -       -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -
1336061147.472010       -       -       -       -       -       -       HTTP::Basic_Auth_Server        -       -       -       -       -       -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -
Like I said, a notice, albeit an uninformative one! Let's take a look at what happens when we give NOTICE() a Notice::Type and a connection.
module HTTP;

export {
  redef enum Notice::Type += {
    ## Generated if a site is detected using Basic Access Authentication
    HTTP::Basic_Auth_Server 
  };
}

event http_header(c: connection, is_orig: bool, name: string, value: string)
      {
      if (/AUTHORIZATION/ in name && /Basic/ in value)
         {
         NOTICE([$note=HTTP::Basic_Auth_Server,
                 $conn=c
               ]);
         }
      }
#separator \x09
#set_separator  ,
#empty_field    (empty)
#unset_field    -
#path   notice
#fields ts      uid     id.orig_h       id.orig_p       id.resp_h       id.resp_p       proto   note    msg     sub     src     dst     p       n       peer_descr      actions policy_items    suppress_for    dropped remote_location.countr
#types  time    string  addr    port    addr    port    enum    enum    string  string  addr    addr    port    count   string  table[enum]     table[count]    interval        bool    string  string  string  double  double  addr    string
1336061141.701690       j931OBJ1895     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        -       -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000
1336061141.914860       j931OBJ1895     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        -       -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000
1336061141.918352       j931OBJ1895     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        -       -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000
1336061147.472010       zKNVZaX8uS7     192.168.164.198 51845   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        -       -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000
Simply adding a connection as an argument allowed Bro's Notice Framework to fill in the uid, originator's host and port, and the responder's host and port. Now we have a notice that is of actual use! Can we make it better? I think so. There's something interesting to note in the connection that gets passed into http_header().
[id=[orig_h=192.168.164.198, orig_p=51845/tcp, resp_h=192.168.164.185, resp_p=80/tcp], orig=[size=359, state=1, num_pkts=2, num_bytes_ip=112], resp=[size=0, state=0, num_pkts=0, num_bytes_ip=0], start_time=1336061147.471671, duration=0.000339, service={^J^IHTTP^J}, addl=, hot=0, history=ScAD, uid=MBPKpZ7z43e, dpd=<uninitialized>, conn=<uninitialized>, extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=[ts=1336061147.47201, uid=MBPKpZ7z43e, id=[orig_h=192.168.164.198, orig_p=51845/tcp, resp_h=192.168.164.185, resp_p=80/tcp], trans_depth=1, method=GET, host=192.168.164.185, uri=/test.html, referrer=<uninitialized>, user_agent=Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0.2) Gecko/20100101 Firefox/10.0.2, request_body_len=0, response_body_len=0, status_code=<uninitialized>, status_msg=<uninitialized>, info_code=<uninitialized>, info_msg=<uninitialized>, filename=<uninitialized>, tags={^J^J}, username=srunnels, password=<uninitialized>, capture_password=F, proxied=<uninitialized>, mime_type=<uninitialized>, first_chunk=T, md5=<uninitialized>, calc_md5=F, calculating_md5=F, extraction_file=<uninitialized>, extract_file=F], http_state=[pending={^J^I[1] = [ts=1336061147.47201, uid=MBPKpZ7z43e, id=[orig_h=192.168.164.198, orig_p=51845/tcp, resp_h=192.168.164.185, resp_p=80/tcp], trans_depth=1, method=GET, host=192.168.164.185, uri=/test.html, referrer=<uninitialized>, user_agent=Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0.2) Gecko/20100101 Firefox/10.0.2, request_body_len=0, response_body_len=0, status_code=<uninitialized>, status_msg=<uninitialized>, info_code=<uninitialized>, info_msg=<uninitialized>, filename=<uninitialized>, tags={^J^J^I}, username=srunnels, password=<uninitialized>, capture_password=F, proxied=<uninitialized>, mime_type=<uninitialized>, first_chunk=T, md5=<uninitialized>, calc_md5=F, calculating_md5=F, extraction_file=<uninitialized>, extract_file=F]^J}, current_request=1, current_response=0], irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]
It's hard to see through the massive amount of information that Bro includes, but if you look you'll notice three fields are filled in that are directly related to what we're working on.
username=srunnels, password=<uninitialized>, capture_password=F,
Bro has not only already detected that a username and password were passed across the line, it has already decoded them! The capture_password field is set to false by default in Bro, but it clearly got our username, so Bro must have a way of decoding Base64. To the grep-mobile!
## Decodes a Base64-encoded string.
##
## s: The Base64-encoded string.
##
## Returns: The decoded version of *s*.
##
## .. bro:see:: decode_base64_custom
global decode_base64: function(s: string): string;
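A quick throwaway script confirms it handles the exact string we generated with Ruby earlier:
event bro_init()
    {
    # Should print "srunnels:recursivehoff", matching the Ruby decode above.
    print decode_base64("c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==");
    }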
Some day, I'll stop being shocked by everything Bro does and just accept that it's wall-to-wall awesome. Well, if Bro can do it, let's make our notice include it as well!
module HTTP;

export {
  redef enum Notice::Type += {
    ## Generated if a site is detected using Basic Access Authentication
    HTTP::Basic_Auth_Server  
  };
}

event http_header(c: connection, is_orig: bool, name: string, value: string) 
  {
  if (/AUTHORIZATION/ in name && /Basic/ in value)
    {
    local parts: string_array;
    
    parts = split1(decode_base64(sub_bytes(value, 7, |value|)), /:/);
    
    NOTICE([$note=HTTP::Basic_Auth_Server,
            $msg=fmt("username: %s password: %s", parts[1], 
            HTTP::default_capture_password == F ? "Blocked" : parts[2]),
            $conn=c
            ]);
    }   
  }
Here, I've taken the value we find by checking for Authorization Basic in the header and broken it into pieces using sub_bytes() and split1(). The function sub_bytes() takes a string, a starting position, and a length and returns a string, while split1() takes a string and a regexp and outputs a string_array whose members are the parts of the string once it has been split exactly one time. In this case, I've used sub_bytes() to remove the "Basic " part of the string in value, then used split1() to break the resulting string on the colon. You'll also notice that I included a ternary operator inside the fmt() call. I'm of the opinion that if the Bro developers did it, we should follow suit; in this case, we're only going to include the password if HTTP::default_capture_password is true. Once we run the script against our trace file, we get the output we expected in notice.log, shown after the sketch below.
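Isolated from the notice plumbing, that string surgery looks like this (the header value is hard-coded purely for illustration):
event bro_init()
    {
    local value = "Basic c3J1bm5lbHM6cmVjdXJzaXZlaG9mZg==";

    # Drop the leading "Basic " (six bytes, so start at position 7), decode the
    # remainder, then split once on the colon between username and password.
    local decoded = decode_base64(sub_bytes(value, 7, |value|));
    local parts = split1(decoded, /:/);

    print fmt("username: %s password: %s", parts[1], parts[2]);
    }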
1336061141.701690       MzLXbKyksE9     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: Blocked    -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
1336061141.914860       MzLXbKyksE9     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: Blocked    -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
1336061141.918352       MzLXbKyksE9     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: Blocked    -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
1336061147.472010       sO6OVrkv5G6     192.168.164.198 51845   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: Blocked    -       192.168.164.198 192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
If we include a command line directive to set HTTP::default_capture_password, the decoded password will be included in the notice.log file.
/usr/local/bro/bin/bro -r ~/tracefiles/20120503120402.lpc post3.bro  "HTTP::default_capture_password = T;"
1336061141.701690       7VFRK2sOUl4     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: recursivehoff      -       192.168.164.198     192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
1336061141.914860       7VFRK2sOUl4     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: recursivehoff      -       192.168.164.198     192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
1336061141.918352       7VFRK2sOUl4     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: recursivehoff      -       192.168.164.198     192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
1336061147.472010       JDdZhg1kTol     192.168.164.198 51845   192.168.164.185 80      tcp     HTTP::Basic_Auth_Server        username: srunnels password: recursivehoff      -       192.168.164.198     192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       3600.000000     F       -       -       -       -       -       -       -       -
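If you'd rather make that the standing behavior instead of remembering a command line knob, the same redef can live in a local policy script as a one-liner:
# Equivalent to passing "HTTP::default_capture_password = T;" on the command line.
redef HTTP::default_capture_password = T;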

5 Making our script more operationally relevant

At the moment, our Bro script generates a notice every time it sees a header with "Authorization: Basic" set. For a long HTTP session, this could get ugly in the notice.log file. As well, an Incident Responder is unlikely to care about how many times a request was sent, only that a server using basic auth was contacted. The Notice::Info record sent to NOTICE() can include an $identifier field that is used by the notice framework to detect when a duplicate notice has been created. For our identifier we'll use the responder's IP address and port. We're also going to set the $suppress_for argument to indicate how long the alert should be suppressed, which will give us a HTTP::Basic_Auth_Server notice once per server per day.
module HTTP;

export {
  redef enum Notice::Type += {
    ## Generated if a site is detected using Basic Access Authentication
    HTTP::Basic_Auth_Server
  };
}

redef Notice::policy += {
  [$pred(n: Notice::Info) =
    {
    return n$note == HTTP::Basic_Auth_Server && Site::is_local_addr(n$id$resp_h);
    },
  $action = Notice::ACTION_EMAIL
  ]
};
  

event http_header(c: connection, is_orig: bool, name: string, value: string)
  {
  if (/AUTHORIZATION/ in name && /Basic/ in value)
    {
    local parts: string_array;
    
    parts = split1(decode_base64(sub_bytes(value, 7, |value|)), /:/);
    
    if (|parts| == 2)
     NOTICE([$note=HTTP::Basic_Auth_Server,
             $msg=fmt("username: %s", parts[1]), 
             $identifier=cat(c$id$resp_h, c$id$resp_p),
             $suppress_for=1day,
             $conn=c
             ]);
    }   
  }
Now, when we run the script against the tracefile we only get one notice in notice.log.
1336061141.701690       ZI9LtsaV4Qh     192.168.164.198 51844   192.168.164.185 80      tcp     HTTP::Basic_Auth        username: srunnels password: recursivehoff      -       192.168.164.198     192.168.164.185 80      -       bro     Notice::ACTION_LOG      6       86400.000000    F       -       -       -       -       -       -       -       -
Another utility we might like to add is detecting when the server using basic auth is one of our own servers. To tell Bro to send an email when a notice is generated, we need to write a Notice::policy item that includes an action ($action) and an anonymous function used to determine if sending the email is appropriate ($pred). The appropriate $action would be Notice::ACTION_EMAIL and the predicate depends on the situation and how you want to restrict it. For detecting sites using basic auth, logging a notice to file is fine when a local system accesses a server using basic auth, but it could be pretty handy to receive an email if the server using basic auth turns out to be local. So for our script we'd use a Notice::policy like:
redef Notice::policy += {
  [$pred(n: Notice::Info) =
    {
    return n$note == HTTP::Basic_Auth_Server && Site::is_local_addr(n$id$resp_h);
    },
   $action = Notice::ACTION_EMAIL
  ]
};
Here, we've created an anonymous function that returns true or false depending on whether our note is HTTP::Basic_Auth_Server and the responder's host is determined to be local. If the $pred returns true, the $action is taken, resulting in an email alert being sent.

6 Wrapping up

Bro is surprisingly complex. There is so much going on beneath the hood that I'm not sure I'll ever fully understand it - especially given how quickly the devs move. That being said, I believe the devs have done a superb job making sure the users have the tools they need at hand. Making heavy use of the online documentation and simply searching through the default scripts shipped with Bro can bring forth a massive learning opportunity! The fact that you can go from evidence to writing scripts that generate notices in a short period of time lets you make "closing the loop" part of your regular incident response cycle.

7 Repository

If you're interested in running the scripts built in this blog post, they are posted in a github repository. To include them in your current build of Bro, follow the commands below.
cd /usr/local/bro/share/bro/site
git clone git://github.com/srunnels/bro-scripts srunnels-scripts
echo "@load srunnels-scripts/http-basic-auth" >> local.bro
# If you want to receive emails about local basic auth notices
echo "@load srunnels-scripts/notice-handling" >> local.bro
Then issue "install" and "restart" from within broctl.


Date: 2012/05/03

Friday, April 27, 2012

Solving Network Forensics Challenges with Bro :: Part 2



1 Expanding on Part 1

In part 1, we tried to answer as many questions as we could with trial and error. In part 2 we're going to take a look at a couple of useful events and dig a little more into what Bro is capable of in scriptland. If you haven't had a chance to read Rob Lee's SANS Network Forensics Challenge writeup, take a moment to familiarize yourself with it.
Again, using the existing Bro scripts is going to be our best bet for not only learning new things but for learning how to do them in a way that is consistent with how Bro works and the vision the devs have for it. It might be useful to take a look at what the most commonly used events might be.
mac@securityonion-Analyst:/usr/local/share/bro$ grep -hri "^\W*event" * | grep -v "bro_init\|bro_done" | sort | uniq -c | sort -rn | head -10
   24 event connection_state_remove(c: connection)
    9 event connection_established(c: connection)
    8 event connection_state_remove(c: connection) &priority=-5
    7 event http_request(c: connection, method: string, original_URI: string,
    7 event file_transferred(c: connection, prefix: string, descr: string,
    7 event connection_finished(c: connection)
    6 event protocol_violation(c: connection, atype: count, aid: count,
    5 event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
    5 event remote_connection_closed(p: event_peer)
    5 event new_connection(c: connection)
Top of the list is the event connection_state_remove() and if it's that high it has to have some serious power behind it! Let's take a quick look at its documentation.
## Generated when a connection's internal state is about to be removed from
## memory. Bro generates this event reliably once for every connection when it
## is about to delete the internal state. As such, the event is well-suited for
## script-level cleanup that needs to be performed for every connection.  The
## ``connection_state_remove`` event is generated not only for TCP sessions but
## also for UDP and ICMP flows.
##
## c: The connection.
##
## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt
##    connection_established connection_external connection_finished
##    connection_first_ACK connection_half_finished connection_partial_close
##    connection_pending connection_rejected connection_reset connection_reused
##    connection_status_update connection_timeout expected_connection_seen
##    new_connection new_connection_contents partial_connection udp_inactivity_timeout
##    tcp_inactivity_timeout icmp_inactivity_timeout conn_stats
global connection_state_remove: event(c: connection);
Well, we can see why connection_state_remove() gets so much use! Right before Bro decides to stop caring about a connection, it generates this event. Let's take a look at using this event against the tracefile provided by SANS.
event connection_state_remove(c: connection)
  {
  print c;
  }     
[id=[orig_h=10.10.10.70, orig_p=1037/tcp, resp_h=10.10.10.10, resp_p=4445/tcp], orig=[size=0, state=1, num_pkts=1, num_bytes_ip=48], resp=[size=0, state=6, num_pkts=1, num_bytes_ip=40], start_time=1272498035.258314, duration=0.000076, service={

}, addl=, hot=0, history=Sr, uid=McumIfcvNF1, dpd=<uninitialized>, conn=[ts=1272498035.258314, uid=McumIfcvNF1, id=[orig_h=10.10.10.70, orig_p=1037/tcp, resp_h=10.10.10.10, resp_p=4445/tcp], proto=tcp, service=<uninitialized>, duration=0.000076, orig_bytes=0, resp_bytes=0, conn_state=REJ, local_orig=<uninitialized>, missed_bytes=0, history=Sr, orig_pkts=1, orig_ip_bytes=48, resp_pkts=1, resp_ip_bytes=40], extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=<uninitialized>, http_state=<uninitialized>, irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]
[id=[orig_h=10.10.10.70, orig_p=1037/tcp, resp_h=10.10.10.10, resp_p=4445/tcp], orig=[size=0, state=1, num_pkts=1, num_bytes_ip=48], resp=[size=0, state=6, num_pkts=1, num_bytes_ip=40], start_time=1272498035.594943, duration=0.000037, service={

}, addl=, hot=0, history=Sr, uid=UX0bcKwPrg4, dpd=<uninitialized>, conn=[ts=1272498035.594943, uid=UX0bcKwPrg4, id=[orig_h=10.10.10.70, orig_p=1037/tcp, resp_h=10.10.10.10, resp_p=4445/tcp], proto=tcp, service=<uninitialized>, duration=0.000037, orig_bytes=0, resp_bytes=0, conn_state=REJ, local_orig=<uninitialized>, missed_bytes=0, history=Sr, orig_pkts=1, orig_ip_bytes=48, resp_pkts=1, resp_ip_bytes=40], extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=<uninitialized>, http_state=<uninitialized>, irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]
[id=[orig_h=10.10.10.70, orig_p=1037/tcp, resp_h=10.10.10.10, resp_p=4445/tcp], orig=[size=0, state=1, num_pkts=1, num_bytes_ip=48], resp=[size=0, state=6, num_pkts=1, num_bytes_ip=40], start_time=1272498036.141827, duration=0.000045, service={

}, addl=, hot=0, history=Sr, uid=EKKnAEoqQO, dpd=<uninitialized>, conn=[ts=1272498036.141827, uid=EKKnAEoqQO, id=[orig_h=10.10.10.70, orig_p=1037/tcp, resp_h=10.10.10.10, resp_p=4445/tcp], proto=tcp, service=<uninitialized>, duration=0.000045, orig_bytes=0, resp_bytes=0, conn_state=REJ, local_orig=<uninitialized>, missed_bytes=0, history=Sr, orig_pkts=1, orig_ip_bytes=48, resp_pkts=1, resp_ip_bytes=40], extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=<uninitialized>, http_state=<uninitialized>, irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]
...snip..
Something you'll notice is that, unlike new_connection(), connection_state_remove() will result in events in a different order. When using new_connection(), your script will generate events roughly in line with what is in the tracefile. If you use connection_state_remove(), you'll see events generated after the connection has ended.
Looking at the output of the connection_state_remove() we can see that pertinent fields such as history, duration, and conn_state have been filled out for us. Using these values, we can start answering more questions from the SANS Forensic Challenge.
To get the answer to question #4, we can use a simple if statement to check for a responder port of 4444/tcp and then print c$start_time and the sum of c$start_time and c$duration.
event connection_state_remove(c: connection)
  {
  if (c$id$resp_p == 4444/tcp)
    {   
    print fmt("%s", strftime("%Y/%m/%d %H:%M:%S", c$start_time));
    print fmt("%s", strftime("%Y/%m/%d %H:%M:%S", c$start_time + c$duration));
    }   
  }
2010/04/28 19:40:00
2010/04/28 19:41:26
To start answering some of the other questions, we need to start looking at the state of a connection. Bro uses two shorthand fields that are not only handy in scriptland but also useful to understand while you're looking at logs: history and conn_state. The documentation for the two fields is below.
## ==========   ===============================================
## conn_state   Meaning
## ==========   ===============================================
## S0           Connection attempt seen, no reply.
## S1           Connection established, not terminated.
## SF           Normal establishment and termination. Note that this is the same symbol as for state S1.
## REJ          Connection attempt rejected.
## S2           Connection established and close attempt by originator seen (but no reply from responder).
## S3           Connection established and close attempt by responder seen (but no reply from originator).
## RSTO         Connection established, originator aborted (sent a RST).
## RSTR         Established, responder aborted.
## RSTOS0       Originator sent a SYN followed by a RST, we never saw a SYN-ACK from the responder.
## RSTRH        Responder sent a SYN ACK followed by a RST, we never saw a SYN from the (purported) originator.
## SH           Originator sent a SYN followed by a FIN, we never saw a SYN ACK from the responder (hence the connection was "half" open).
## SHR          Responder sent a SYN ACK followed by a FIN, we never saw a SYN from the originator.
## OTH          No SYN seen, just midstream traffic (a "partial connection" that was not later closed).
## ==========   ===============================================
conn_state:   string          &log &optional;
...snip...
       ## Records the state history of connections as a string of letters.
71     ## For TCP connections the meaning of those letters is:
72     ##
73     ## ======  ====================================================
74     ## Letter  Meaning
75     ## ======  ====================================================
76     ## s       a SYN w/o the ACK bit set
77     ## h       a SYN+ACK ("handshake")
78     ## a       a pure ACK
79     ## d       packet with payload ("data")
80     ## f       packet with FIN bit set
81     ## r       packet with RST bit set
82     ## c       packet with a bad checksum
83     ## i       inconsistent packet (e.g. SYN+RST bits both set)
84     ## ======  ====================================================
85     ##
86     ## If the letter is in upper case it means the event comes from the
87     ## originator and lower case then means the responder.
88     ## Also, there is compression. We only record one "d" in each direction,
89     ## for instance. I.e., we just record that data went in that direction.
90     ## This history is not meant to encode how much data that happened to
91     ## be.
92     history:      string          &log &optional;
Using either of these fields, we can make decisions based on the state of a connection as observed by Bro. Question #8 wants to know when the victim machine finally connected to the attacker's machine on port 4445/tcp. If you recall from the previous post, we showed that the machine attempted to connect to 4445/tcp approximately every 11 seconds. Since the originator made multiple attempts to connect, had we stuck with new_connection(), we would have had to store some kind of state and look for a response and session establishment between the two endpoints. With connection_state_remove(), Bro has already done the hard work for us! All we need to do is look for the state that indicates a successful connection and termination. According to the documentation for conn_state, "SF" indicates a normal connection establishment and termination.
event connection_state_remove(c: connection)
  {
  if (c$id$resp_p == 4444/tcp)
    {   
    print fmt("Start of connection to 4444/tcp: %s", strftime("%Y/%m/%d %H:%M:%S", c$start_time));
    print fmt("End of connection to 4444/tcp:   %s", strftime("%Y/%m/%d %H:%M:%S", c$start_time + c$duration));
    }   

  if (c$id$resp_p == 4445/tcp && c$conn$conn_state == "SF")
    print fmt("End of connection to 4445/tcp:   %s", strftime("%Y/%m/%d %H:%M:%S", c$start_time + c$duration));
  }
Start of connection to 4444/tcp: 2010/04/28 19:40:00
End of connection to 4444/tcp:   2010/04/28 19:41:26
End of connection to 4445/tcp:   2010/04/28 19:43:17  
It's easy to see why connection_state_remove() is used so often! In just a few minutes we were able to answer three more questions from the forensics challenge. Refactoring the code from the last post to use it doesn't change our script all that much, but it will allow us to leverage more information as our scripting requirements expand.

2 The unbroly new_packet()

Solving questions like 7a and 7b requires that we start looking at the packet level instead of at whole connections. A short time spent grepping through the event.bif.bro file leads us to new_packet() and a warning that we should definitely respect!
479 ## Generated for every packet Bro sees. This is a very low-level and expensive
480 ## event that should be avoided when at all possible. It's usually infeasible to
481 ## handle when processing even medium volumes of traffic in real-time. That
482 ## said, if you work from a trace and want to do some packet-level analysis,
483 ## it may come in handy.
484 ##
485 ## c: The connection the packet is part of.
486 ##
487 ## p: Information from the header of the packet that triggered the event.
488 ##
489 ## .. bro:see:: tcp_packet packet_contents 
490 global new_packet: event(c: connection, p: pkt_hdr);   
Using new_packet() generates a lot of overhead! Were you to use it on live traffic, you'd more than likely bring your sensor to its knees as it attempted to generate an event for every single packet. For example, running a pair of test events against the evidence trace file from the challenge shows the extra load brought to bear on Bro.
event new_connection(c: connection)
  {
  print "new connection";
  }
event new_packet(c: connection, p: pkt_hdr)
  {
  print "new packet";
  }
bro -r evidence06.pcap event_test.bro  | grep -i "new packet" | wc -l
2554
bro -r evidence06.pcap event_test.bro  | grep -i "new connection" | wc -l
123
Seeing the difference between packet-level and connection-level analysis, it's easy to understand why Bro's documentation includes such a warning. Let's take a look at the pkt_hdr record passed to new_packet() by looking at init-bare.bro.
 999 ## A packet header, consisting of an IP header and transport-layer header.
1000 ##
1001 ## .. bro:see:: new_packet
1002 type pkt_hdr: record {
1003   ip: ip_hdr; ##< The IP header.
1004   tcp: tcp_hdr &optional; ##< The TCP header if a TCP packet.
1005   udp: udp_hdr &optional; ##< The UDP header if a UDP packet.
1006   icmp: icmp_hdr &optional; ##< The ICMP header if an ICMP packet.
1007 };  
As you can see, much of the pkt_hdr type is a collection of other types. Since we're only interested in the IP and TCP data for the challenge, we need to identify the fields in ip_hdr and tcp_hdr respectively.
944 ## Values extracted from an IP header.
945 ##
946 ## .. bro:see:: pkt_hdr discarder_check_ip
947 type ip_hdr: record {
948   hl: count;    ##< Header length in bytes.
949   tos: count;   ##< Type of service.
950   len: count;   ##< Total length.
951   id: count;    ##< Identification.
952   ttl: count;   ##< Time to live.
953   p: count;   ##< Protocol.
954   src: addr;    ##< Source address.
955   dst: addr;    ##< Destination address.
956 };
969 ## Values extracted from a TCP header.
970 ##
971 ## .. bro:see:: pkt_hdr discarder_check_tcp
972 type tcp_hdr: record {
973   sport: port;    ##< source port.
974   dport: port;    ##< destination port
975   seq: count;   ##< sequence number
976   ack: count;   ##< acknowledgement number
977   hl: count;    ##< header length (in bytes)
978   dl: count;    ##< data length (xxx: not in original tcphdr!)
979   flags: count;   ##< flags
980   win: count;   ##< window
981 };
Once processed by Bro, pkt_hdr contains the pertinent Layer 3 and Layer 4 information observed for the packet.
new packet: [ip=[hl=20, tos=0, len=337, id=47, ttl=128, p=6, src=10.10.10.70, dst=10.10.10.10], tcp=[sport=1035/tcp, dport=8080/tcp, seq=3905816263, ack=3420183379, hl=20, dl=297, 
new packet: [ip=[hl=20, tos=0, len=40, id=9360, ttl=64, p=6, src=10.10.10.10, dst=10.10.10.70], tcp=[sport=8080/tcp, dport=1035/tcp, seq=3420183379, ack=3905816560, hl=20, dl=0, fl
new packet: [ip=[hl=20, tos=0, len=1500, id=9361, ttl=64, p=6, src=10.10.10.10, dst=10.10.10.70], tcp=[sport=8080/tcp, dport=1035/tcp, seq=3420183379, ack=3905816560, hl=20, dl=146
Question 7 wants us to determine how often the TCP initial sequence number (ISN) and the IP ID change for the repeated failed connection attempts to port 4445/tcp. The pkt_hdr data type provides those values in p$tcp$seq and p$ip$id respectively. Dumping the contents of these values is easy, and we can likely answer our questions just by visually inspecting them. So far, I don't think I've started a single script or test without first dumping pertinent fields and seeing what kind of information I can gather and how it's going to affect the resulting script. Not only has it been good practice for solidifying some of the common data structures in my mind, but it's also been a good way to keep a smooth flow between what I see in the trace file and what I attempt to do in script land.
1 event new_packet(c: connection, p: pkt_hdr)
2   {
3   if (c$id$resp_p == 4445/tcp && c$history == "")
4     print fmt("new_packet(): ip_id: %s tcp sequence: %s", p$ip$id, p$tcp$seq);
5   }
You'll notice we use an if statement that matches on the responder's port and a blank c$history. What we're testing for is a packet with the SYN bit set, which in the context of c$history would look like "S". However, it turns out that the c$history field is very aptly named! Bro starts building c$history only after it has seen a packet, meaning that the first time you'll see the "S" indicating an attempted SYN is on the responder's reply heading back to the originator. You can see this for yourself by altering the if statement above to exclude the c$history test and including the field in the print statement. It's a short detour, but it illustrates just a tiny bit of the work being done behind the scenes for us when we handle events at a higher level.
1 event new_packet(c: connection, p: pkt_hdr)
2   {
3   if (c$id$resp_p == 4445/tcp)
4     print fmt("new_packet(): ip_id: %s tcp sequence: %s history: %s", p$ip$id, p$tcp$seq, c$history);
5   }
new_packet(): ip_id: 359   tcp_seq: 553522758   history: 
new_packet(): ip_id: 0   tcp_seq: 0   history: S
new_packet(): ip_id: 360   tcp_seq: 553522758   history: 
new_packet(): ip_id: 0   tcp_seq: 0   history: S
new_packet(): ip_id: 361   tcp_seq: 553522758   history: 
new_packet(): ip_id: 0   tcp_seq: 0   history: S
new_packet(): ip_id: 362   tcp_seq: 553800369   history: 
new_packet(): ip_id: 0   tcp_seq: 0   history: S
...snip...
new_packet(): ip_id: 597   tcp_seq: 1979373164   history: 
new_packet(): ip_id: 0   tcp_seq: 0   history: S
new_packet(): ip_id: 598   tcp_seq: 1979373164   history: 
new_packet(): ip_id: 0   tcp_seq: 1436350344   history: Sh
new_packet(): ip_id: 599   tcp_seq: 1979373165   history: Sh
new_packet(): ip_id: 24029   tcp_seq: 1436350345   history: ShA
new_packet(): ip_id: 24030   tcp_seq: 1436350349   history: ShAd
new_packet(): ip_id: 24031   tcp_seq: 1436351809   history: ShAd
new_packet(): ip_id: 600   tcp_seq: 1979373165   history: ShAd
You can see the c$history field populating itself one step behind. Of course, if we stick to using connection_state_remove this will be completely transparent to us.
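If you want to convince yourself of that, a quick sketch like this one prints the final, fully assembled history string once the connection is torn down:
event connection_state_remove(c: connection)
  {
  if (c$id$resp_p == 4445/tcp)
    print fmt("final history for %s: %s", c$uid, c$history);
  }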
Let's get back to solving the challenge. Running the script that matches based on a blank c$history gives us:
new_packet(): ip_id: 359 tcp sequence: 553522758
new_packet(): ip_id: 360 tcp sequence: 553522758
new_packet(): ip_id: 361 tcp sequence: 553522758
new_packet(): ip_id: 362 tcp sequence: 553800369
new_packet(): ip_id: 363 tcp sequence: 553800369
new_packet(): ip_id: 364 tcp sequence: 553800369
new_packet(): ip_id: 365 tcp sequence: 554100968
new_packet(): ip_id: 366 tcp sequence: 554100968
new_packet(): ip_id: 369 tcp sequence: 554100968
new_packet(): ip_id: 370 tcp sequence: 554399680
new_packet(): ip_id: 371 tcp sequence: 554399680
new_packet(): ip_id: 372 tcp sequence: 554399680
new_packet(): ip_id: 373 tcp sequence: 554670846
...snip...  
It looks like the IP ID field increments with every packet while the TCP sequence number changes every three packets. There are 120 connection attempts to port 4445/tcp, which is somewhat unwieldy to check visually. But wait, Bro isn't here to make you do any laborious counting. We can do this in scriptland!
We'll make use of Bro's tables and sets to confirm our suspicions about the intervals. Each time we see a SYN packet heading to port 4445/tcp, we'll add that packet's IP ID (p$ip$id) to a set. Since a set only holds unique members, if we compare the number of attempts against the number of members in the ip_id set (using |ip_id|) in a bro_done() event, the two should be equal if an ID is never reused. For the TCP sequence number, we need a table to track each sequence number and count how many times it appears. We'll then treat the table like a poor man's stack and make comparisons.
 1 global attempts_count: count = 0;  
 2 global ip_id: set[count];
 3 global tcp_seq: table[count] of count;
 4 
 5 event new_packet(c: connection, p: pkt_hdr)
 6   {
 7   if (c$id$resp_p == 4445/tcp && c$history == "")
 8     {
 9     ++attempts_count;
10     if (p$ip$id !in ip_id)
11       add ip_id[p$ip$id];
12     if (p$tcp$seq !in tcp_seq)
13       tcp_seq[p$tcp$seq] = 1;
14     else
15       ++tcp_seq[p$tcp$seq];
16     }
17   }
18 
19 event bro_done()
20   {
21   local sequence_check: count;
22   local div: double;
23   for (seq in tcp_seq)
24     {
25     sequence_check = tcp_seq[seq];
26     delete tcp_seq[seq];
27     for (check in tcp_seq)
28       if (sequence_check == tcp_seq[check])
29         delete tcp_seq[check];
30     }
31   div = |ip_id| / attempts_count;
32   print fmt("IP ID changes every %.2f packet.", div );
33   if (|tcp_seq| == 0)
34     print fmt("TCP sequence changes every %d packets.", sequence_check);
35   }
IP ID changes every 1.00 packet.
TCP sequence changes every 3 packets.

3 Wrapping up

We covered two incredibly powerful events in this post, both of which allowed us to answer more questions from the SANS Network Forensic challenge, but only one of those events is viable for us in production: connection_state_remove(). While the new_packet() event has primarily niche uses due to the extra load it introduces, it's handy for parsing trace files and for exploring how Bro works and everything it does behind the scenes. This is likely the last time we'll be working with just the basics of Bro scripting. My intention is that Part 3 will include some more practical uses of Bro's scripting language. We'll use the things we learned in Part 1 and Part 2, but we'll try to apply them the way Bro is intended to be used. While it's been useful (and fun!) to parse through a trace file with Bro, what we've been doing isn't something that can be deployed across an enterprise, and that's where Bro really shines!

4 Code so far

If you're interested in some refactored code that includes the code we used in this post, here it is with a sample output.
 1 global earliest: time;                                     
 2 global source_ports: table[port] of time;
 3 global first_contact_4444: time;
 4 global last_contact_4444: time;
 5 global first_contact_4445: time;
 6 global last_contact_4445: time;
 7 global attempts_count: count = 0;
 8 global ip_id: set[count];
 9 global tcp_seq: table[count] of count;
10 
11 event bro_init() &priority=10
12   {
13   print "SANS Forensics Challenge";
14   print "========================";
15   earliest = current_time();
16   }
17 
18 event new_packet(c: connection, p: pkt_hdr)
19   {
20   if (c$id$resp_p == 4445/tcp && c$history == "")
21     {
22     ++attempts_count;
23     add ip_id[p$ip$id];
24     if (p$tcp$seq !in tcp_seq)
25       tcp_seq[p$tcp$seq] = 1;
26     else
27       ++tcp_seq[p$tcp$seq];
28     }
29   }
30 
31 event connection_state_remove(c: connection)
32   {
33   if (c$start_time < earliest )
34     earliest = c$start_time;
35   if (c$id$orig_h == 10.10.10.70 && c$id$resp_p == 4445/tcp)
36     {
37     if (c$id$orig_p !in source_ports)
38       source_ports[c$id$orig_p] = c$start_time;
39 
40     if (c$conn$conn_state == "SF")
41       {
42       first_contact_4445 = c$start_time;
43       last_contact_4445  = c$start_time + c$duration;
44       }
45     }
46 
47   if (c$id$resp_p == 4444/tcp)
48     {
49     first_contact_4444 = c$start_time;
50     last_contact_4444 =  c$start_time + c$duration;
51     }
52   }
53 
54 event bro_done()
55   {
56   local sports: vector of port;
57   local stime: vector of time;
58   local sequence_check: count;
59 
60   for (p in source_ports)
61     {
62     sports[|sports|] = p;
63     stime[|sports|] = source_ports[p];
64     }
65 
66   for (seq in tcp_seq)
67     {
68     sequence_check = tcp_seq[seq];
69     delete tcp_seq[seq];
70     for (check in tcp_seq)
71       if (sequence_check == tcp_seq[check])
72         delete tcp_seq[check];
73     }
74 
75   sort(stime);
76   sort(sports);
77   print "Question #4:";
78   print fmt("    Start of session to 4444/tcp: %s", first_contact_4444 - earliest);
79   print "Question #5:";
80   print fmt("    End of session to 4444/tcp: %s", last_contact_4444 - earliest);
81   print "Question #7a:";
82   if (|tcp_seq| == 0)
83     print fmt("    TCP Sequence changes every %s packets.", sequence_check);
84   print "Question #7b:";
85   print fmt("    Number of attempts: %s", attempts_count);
86   print fmt("    Number of ip id: %d", |ip_id|);
87   print "Question #7c:";
88   for (j in stime)
89     print fmt("    Delta Time: %s", stime[j+1] - stime[j]);
90   print "Question #8:";
91   print fmt("    Successful connection to 4445/tcp: %s", first_contact_4445 - earliest);
92   print "Question #10:";
93   print fmt("    Connection to 4445/tcp closed: %s", last_contact_4445 - earliest);
94   print "Connection Statistics:";
95   print "======================";
96   print fmt("First Packet: %s", strftime("%Y/%m/%d %H:%M:%OS", earliest));
97   print fmt("End of Capture: %s", strftime("%Y/%m/%d %H:%M:%S", network_time()));
98   print "========================";
99   }
SANS Forensics Challenge
========================
Question #4:
    Start of session to 4444/tcp: 1.0 sec 265.0 msecs 851.0 usecs
Question #5:
    End of session to 4444/tcp: 1.0 min 27.0 secs 587.0 msecs 153.0 usecs
Question #7a:
    TCP Sequence changes every 3 packets.
Question #7b:
    Number of attempts: 120
    Number of ip id: 120
Question #7c:
    Delta Time: 11.0 secs 785.0 msecs 487.0 usecs
    Delta Time: 11.0 secs 730.0 msecs 439.0 usecs
    Delta Time: 11.0 secs 795.0 msecs 35.0 usecs
    Delta Time: 11.0 secs 735.0 msecs 993.0 usecs
    Delta Time: 11.0 secs 884.0 msecs 180.0 usecs
    Delta Time: 11.0 secs 960.0 msecs 521.0 usecs
    Delta Time: 11.0 secs 907.0 msecs 572.0 usecs
Question #8:
    Successful connection to 4445/tcp: 2.0 mins 3.0 secs 674.0 msecs 198.0 usecs
Question #10:
    Connection to 4445/tcp closed: 3.0 mins 18.0 secs 441.0 msecs 345.0 usecs
Connection Statistics:
======================
First Packet: 2010/04/28 19:39:59
End of Capture: 2010/04/28 19:43:17
========================

Friday, April 20, 2012

Solving Network Forensic Challenges with Bro :: Part 1


1 Getting to know Bro

Bro is a lot of things, but one of its primary strengths is providing a programming language for network analysis. Out of the box, Bro more than likely does more than you realize, and you'll get to spend a significant amount of time working with its logs to better understand your network and to develop the skills to identify actions that require further investigation. If you're interested in some basic Bro scripting tutorials, the Bro team posted the tutorials from the Bro 2011 Workshop here.

If you've never used Bro before, a great way to get it up and running is to install Doug Burks' SecurityOnion Linux distribution in a VM and work from there. Everything I've done in this post was done in a SecurityOnion VM.

2 Why use Bro?

Bro isn't a signature-based IDS. In fact, calling Bro an IDS does it something of a disservice. It's more aptly described as a Network Security Monitoring application or framework. Bro's detections are based primarily on heuristics, and when those are combined with a robust built-in programming language, it becomes a tool you can't ignore.

As for using Bro to solve an old SANS Network Forensics Challenge? While Bro's programming language is not very difficult, it does require understanding a lot of underlying capability. I had the pleasure of hanging out with Seth Hall from the Bro project, and I'd often hear him talk about how it only took him an hour or so to add some incredible functionality to Bro; that ease tends to overshadow the fact that Seth has spent years working on Bro. Watching Seth whip something up in Bro is indistinguishable from wizardry, and my hope is that, by pulling back a little bit of the curtain, I can better understand how to utilize Bro. Getting up to speed with Bro is a daunting task, especially if you avoid things that could be described as navel gazing! Using Bro to solve a forensics challenge's network-based questions is navel gazing, but it forced me to dig into a solid amount of source code and documentation.

3 The SANS Forensic Challenge

The challenge I worked with had a basic set up. The victim (10.10.10.70/32) is exploited using a client-side spear phishing attack. You can read the full challenge as well as get the trace file on the SANS.org site.

The questions that stuck out as opportunities to use Bro were:

  1. When was the TCP session on port 4444 opened? (Provide the number of seconds since the beginning of the packet capture, rounded to tenths of a second. ie, 49.5 seconds)
  2. When was the TCP session on port 4444 closed? (Provide the number of seconds since the beginning of the packet capture, rounded to tenths of a second. ie, 49.5 seconds)
  3. Vick's computer repeatedly tried to connect back to the malicious server on port 4445, even after the original connection on port 4444 was closed. With respect to these repeated failed connection attempts:

    a. How often does the TCP initial sequence number (ISN) change? (Choose one.)

    1. Every packet
    2. Every third packet
    3. Every 10-15 seconds
    4. Every 30-35 seconds
    5. Every 60 seconds

    b. How often does the IP ID change? (Choose one.)

    1. Every packet
    2. Every third packet
    3. Every 10-15 seconds
    4. Every 30-35 seconds
    5. Every 60 seconds

    c. How often does the source port change? (Choose one.)

    1. Every third packet
    2. Every packet
    3. Every 30-35 seconds
    4. Every 10-15 seconds
    5. Every 60 seconds
  4. Eventually, the malicious server responded and opened a new connection. When was the TCP connection on port 4445 first successfully completed? (Provide the number of seconds since the beginning of the packet capture, rounded to tenths of a second. ie, 49.5 seconds)
  5. Subsequently, the malicious server sent an executable file to the client on port 4445. What was the MD5 sum of this executable file?
  6. When was the TCP connection on port 4445 closed? (Provide the number of seconds since the beginning of the packet capture, rounded to tenths of a second. ie, 49.5 seconds)

The challenge included finding the MD5 and filename of files downloaded, but as you'll see later a specific aspect of the tracefile prevents us from doing that.

4 Learning by failing

The primary reason for challenging myself to solve a network forensics challenge with Bro was to develop a better understanding of how Bro works and to push myself to write more Bro scripts. To wit, I spent a good deal of time failing, and much of this post covers the process I used to find the information I needed to re-orient myself and make headway.

4.1 Finding Connections

The command line utility tshark is wireshark for CLI lovers and one of my most loved tools. Whenever I'm looking at pcaps, I try to get a bird's eye view of what is happening using tshark. For example, a common tshark command I use prints the source IP and port and the destination IP and port by using the '-T fields' command line option. To save space, I've applied the common 'sort | uniq -c | sort -n' pipeline to show and count the unique entries. For reference, when run without the sort commands, the output is 2554 lines long.

tshark -r evidence06.pcap -T fields -e ip.src -e tcp.srcport -e ip.dst -e tcp.dstport | sort | uniq -c | sort -n
  1 10.10.10.70        10.10.10.255    
  5 10.10.10.70    1035    10.10.10.10 8080
  8 10.10.10.10    8080    10.10.10.70 1035
 15 10.10.10.10    4445    10.10.10.70 1037
 15 10.10.10.10    4445    10.10.10.70 1038
 15 10.10.10.10    4445    10.10.10.70 1039
 15 10.10.10.10    4445    10.10.10.70 1040
 15 10.10.10.10    4445    10.10.10.70 1041
 15 10.10.10.10    4445    10.10.10.70 1042
 15 10.10.10.10    4445    10.10.10.70 1043
 15 10.10.10.70    1037    10.10.10.10 4445
 15 10.10.10.70    1038    10.10.10.10 4445
 15 10.10.10.70    1039    10.10.10.10 4445
 15 10.10.10.70    1040    10.10.10.10 4445
 15 10.10.10.70    1041    10.10.10.10 4445
 15 10.10.10.70    1042    10.10.10.10 4445
 15 10.10.10.70    1043    10.10.10.10 4445
263 10.10.10.70    1044    10.10.10.10 4445
424 10.10.10.70    1036    10.10.10.10 4444
664 10.10.10.10    4445    10.10.10.70 1044
979 10.10.10.10    4444    10.10.10.70 1036

When I first started to look at using Bro for this, I tried to cast a wide net and began with the event connection_established, which is exported from Bro's event.bif.bro file.

187 ## Generated for an established TCP connection. The event is raised when the
188 ## initial 3-way TCP handshake has successfully finished for a connection.
189 ##
190 ## c: The connection.
...snip...
198 global connection_established: event(c: connection);

Anytime the three-way TCP handshake (SYN -> SYN/ACK -> ACK) has completed, this event should fire, and since Bro is stream based, we should be able to produce a list of connections from the libpcap file provided in the challenge.

Bro treats the connection itself as a datatype; if you search through your base/init-bare.bro file, you'll find the documentation for this type.

188 # A connection. This is Bro's basic connection type describing IP- and
189 # transport-layer information about the conversation. Note that Bro uses a
190 # liberal interpretation of "connection" and associates instances of this type
191 # also with UDP and ICMP flows.
192 type connection: record {
193   id: conn_id;  ##< The connection's identifying 4-tuple.
194   orig: endpoint; ##< Statistics about originator side.
195   resp: endpoint; ##< Statistics about responder side.
196   start_time: time; ##< The timestamp of the connection's first packet.
197   ## The duration of the conversation. Roughly speaking, this is the interval between
198   ## first and last data packet (low-level TCP details may adjust it somewhat in
199   ## ambiguous cases).
200   duration: interval;
201   ## The set of services the connection is using as determined by Bro's dynamic
202   ## protocol detection. Each entry is the label of an analyzer that confirmed that
203   ## it could parse the connection payload.  While typically, there will be at
204   ## most one entry for each connection, in principle it is possible that more than
205   ## one protocol analyzer is able to parse the same data. If so, all will
206   ## be recorded. Also note that the recorded services are independent of any
207   ## transport-level protocols.
208         service: set[string];
209   addl: string; ##< Deprecated.
210   hot: count; ##< Deprecated.
211   history: string;  ##< State history of TCP connections. See *history* in :bro:see:`Conn::Info`.
212   ## A globally unique connection identifier. For each connection, Bro creates an ID
213   ## that is very likely unique across independent Bro runs. These IDs can thus be
214   ## used to tag and locate information  associated with that connection.
215   uid: string;
216 };

You can check out the documentation on the Bro site if you want to explore the connection data type further. As you can see, each connection is itself a collection of other datatypes, including endpoints, times, and counts. Bro gives us access to the whole fire hose of network information, even in script land! Let's take a look at what we can see with the connection_established() event.
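
A minimal handler along these lines (just printing the whole connection record) is all the script needs at this point:

event connection_established(c: connection)
      {
      print c;
      }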

mac@securityonion-Analyst:~/challenges/SANS Forensic$ bro -r evidence06.pcap challenge2.bro 
[id=[orig_h=10.10.10.70, orig_p=1036/tcp, resp_h=10.10.10.10, resp_p=4444/tcp], orig=[size=0, state=4, num_pkts=1, num_bytes_ip=48], resp=[size=0, state=4, num_pkts=0, num_bytes_ip=0], start_time=1272498000.577135, duration=0.000071, service={

}, addl=, hot=0, history=Sh, uid=XRD3DR2rr51, dpd=<uninitialized>, conn=<uninitialized>, extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=<uninitialized>, http_state=<uninitialized>, irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]
[id=[orig_h=10.10.10.70, orig_p=1044/tcp, resp_h=10.10.10.10, resp_p=4445/tcp], orig=[size=0, state=4, num_pkts=1, num_bytes_ip=48], resp=[size=0, state=4, num_pkts=0, num_bytes_ip=0], start_time=1272498122.985483, duration=0.000097, service={

}, addl=, hot=0, history=Sh, uid=LGk3mPtc00b, dpd=<uninitialized>, conn=<uninitialized>, extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=<uninitialized>, http_state=<uninitialized>, irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]

Even a cursory glance shows that while we're seeing a lot of data from Bro, we're not seeing as many connections as we should! Not only did the challenge's description tell us that there was HTTP traffic, but compared to the tshark output we're looking at significantly fewer entries than expected, even allowing for the difference between per-packet and per-connection views. Given the documentation for the connection_established event above, it should be somewhat obvious why some of the streams are missing: if connection_established only fires when a three-way handshake is present, then we must be missing the three-way handshake for the HTTP connection. Let's check the first three packets of the trace file and see if they match up with a TCP 3-way handshake.

tshark -r evidence06.pcap -c 3 -T fields -e tcp.flags
0x18
0x10
0x10

That is certainly not a TCP handshake, so it looks like the trace file starts in the middle of a stream. These values show us a PSH/ACK followed by two ACKs instead of the standard 3-way handshake.

For reference, a TCP handshake should look like this:

tshark -r browse.pcap -c 3 -T fields -e tcp.flags   
0x02
0x12
0x10

If you're curious as to how the hex values above map to TCP flags, each flag corresponds to one bit of the TCP flags byte:

Flag    CWR  ECE  URG  ACK  PSH  RST  SYN  FIN
Value   128   64   32   16    8    4    2    1

For example, 0x12 is 16 + 2 (ACK + SYN, the second step of the handshake) and 0x18 is 16 + 8 (ACK + PSH).
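
If you'd rather have Bro do the arithmetic, here's a minimal sketch (using the low-level new_packet() event, so really only appropriate against a trace file) that prints the raw flag bits of each packet; the field names come straight from the pkt_hdr and tcp_hdr records:

event new_packet(c: connection, p: pkt_hdr)
      {
      # p$tcp is &optional, so test for it before dereferencing.
      # flags is a plain count: 2 = SYN, 16 = ACK, 18 = SYN+ACK, 24 = PSH+ACK.
      if ( p?$tcp )
        print fmt("%s -> %s flags: %d", p$ip$src, p$ip$dst, p$tcp$flags);
      }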

We can double-check our findings with a little more abuse of tshark by looking for any packets with the SYN and ACK flags set, which would indicate a response in the TCP 3-way handshake.

tshark -r evidence06.pcap -T fields -e ip.src -e ip.dst -e tcp.flags | awk '{if ($3 == "0x12") print $0}'
10.10.10.10     10.10.10.70     0x12
10.10.10.10     10.10.10.70     0x12

So, not only will we miss the HTTP session that is already in progress, we're also not going to see any traffic that doesn't have a 3-way handshake. Since the challenge references rejected connections, we're definitely going to need to go back to the event.bif.bro file and find a better solution.

Some quick perusing and searching for significant terms led me to the new_connection() event.

133 ## Generated for every new connection. The event is raised with the first packet
134 ## of a previously unknown connection. Bro uses a flow-based definition of
135 ## "connection" here that includes not only TCP sessions but also UDP and ICMP
136 ## flows.
137 ##
138 ## c: The connection.   
...snip...
149 ##    Handling this event is potentially expensive. For example, during a SYN
150 ##    flooding attack, every spoofed SYN packet will lead to a new
151 ##    event.
152 global new_connection: event(c: connection);
153 

This event is right up our alley! It doesn't care about the 3-way handshake; if it sees a packet from a connection it hasn't seen before, it fires. Let's change our initial Bro script to replace connection_established() with new_connection().

event new_connection(c: connection)
      {                 
      print c;
      }
[id=[orig_h=10.10.10.70, orig_p=1035/tcp, resp_h=10.10.10.10, resp_p=8080/tcp], orig=[size=0, state=0, num_pkts=0, num_bytes_ip=0], resp=[size=0, state=0, num_pkts=0, num_bytes_ip=0], start_time=1272497999.311284, duration=0.0, service={

}, addl=, hot=0, history=, uid=P0lEbfBMXFh, dpd=<uninitialized>, conn=<uninitialized>, extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=<uninitialized>, http_state=<uninitialized>, irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]
[id=[orig_h=10.10.10.70, orig_p=1036/tcp, resp_h=10.10.10.10, resp_p=4444/tcp], orig=[size=0, state=0, num_pkts=0, num_bytes_ip=0], resp=[size=0, state=0, num_pkts=0, num_bytes_ip=0], start_time=1272498000.577135, duration=0.0, service={

}, addl=, hot=0, history=, uid=y4plS32M81e, dpd=<uninitialized>, conn=<uninitialized>, extract_orig=F, extract_resp=F, dns=<uninitialized>, dns_state=<uninitialized>, ftp=<uninitialized>, http=<uninitialized>, http_state=<uninitialized>, irc=<uninitialized>, smtp=<uninitialized>, smtp_state=<uninitialized>, ssh=<uninitialized>, ssl=<uninitialized>, syslog=<uninitialized>]

Now our output is more reasonable and there are 369 lines of it! That's more like it!

Let's make some formatting changes to our script so it's a little easier to quickly parse the output. We'll make it reminiscent of the tshark output above.

In Bro, we can use the '$' to dereference record fields, so if we wanted the orig_h, we could walk the output above and build our expression: c$id$orig_h. Bro also provides an fmt() function that operates much like printf, so we can build nicely formatted output with four string placeholders (%s) and the data we'd like to see (c$id$orig_h, c$id$orig_p, c$id$resp_h, and finally c$id$resp_p).

event new_connection(c: connection)
   {
   print fmt("New Connection => orig: %s %s resp: %s %s", c$id$orig_h, c$id$orig_p, c$id$resp_h, c$id$resp_p); 
   }
New Connection => orig: 10.10.10.70 1035/tcp resp: 10.10.10.10 8080/tcp
New Connection => orig: 10.10.10.70 1036/tcp resp: 10.10.10.10 4444/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp
...snip...
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp

Now we have the event fired for each new connection, and we can see the originator port and the responder port. With just this information we can start building an answer to question 9c from the challenge. The question is specific about the originating host and the responder's port, so let's be specific as well. A simple if statement will let us print our nicely formatted output only if the originating host is the victim machine (10.10.10.70) and the responder's port is 4445/tcp. The question is also specific about how long it takes for the originator to switch ports, so we'll add the start_time to our output.

if (c$id$orig_h == 10.10.10.70 && c$id$resp_p == 4445/tcp)
   print fmt("New Connection => orig: %s %s resp: %s %s time: %s", c$id$orig_h, c$id$orig_p, c$id$resp_h, c$id$resp_p, c$start_time); 
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498035.258314
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498035.594943
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498036.141827
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498036.142471
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498036.6887
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498037.235554
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498037.23652
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498037.782456
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498038.329315
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498038.329973
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498038.876194
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498039.313691
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498039.314346
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498039.860571
New Connection => orig: 10.10.10.70 1037/tcp resp: 10.10.10.10 4445/tcp time: 1272498040.298079
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498047.043801
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498047.40741
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498047.954312
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498047.954969
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498048.391806
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498048.938686
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498048.939329
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498049.485544
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498050.032408
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498050.033078
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498050.579291
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498051.016808
New Connection => orig: 10.10.10.70 1038/tcp resp: 10.10.10.10 4445/tcp time: 1272498051.017456
...snip...
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498107.236156
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498107.782395
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498108.329244
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498108.329911
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498108.876468
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498109.313638
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498109.314295
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498109.860522
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498110.298011
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498110.298669
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498110.844862
New Connection => orig: 10.10.10.70 1043/tcp resp: 10.10.10.10 4445/tcp time: 1272498111.282386
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498118.057545
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498118.391735
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498118.938626
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498118.939275
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498119.485504
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498120.032355
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498120.033013
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498120.579247
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498121.016727
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498121.01738
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498121.563621
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498122.001099
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498122.001752
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498122.548001
New Connection => orig: 10.10.10.70 1044/tcp resp: 10.10.10.10 4445/tcp time: 1272498122.985483

Now we've got just the traffic we're interested in for question 9c. However, counting is still boring; let's have Bro count for us! To do this we'll use a table to map an originator port to the time we first see that port attempting to connect to the attacker's 4445/tcp port. As each new_connection() event is handled, if we haven't seen the originator's port before, we create a new entry in the table mapping the originator port to c$start_time. Once we've processed each new_connection() event, we still need to produce some valuable output from the data set we've created. The best place to do this is in the bro_done() event.

global source_ports: table[port] of time;

event new_connection(c: connection)
  {
  if (c$id$orig_h == 10.10.10.70 && c$id$resp_p == 4445/tcp)
    {   
    if (c$id$orig_p !in source_ports)
      source_ports[c$id$orig_p] = c$start_time;
    }   
  }

event bro_done()
  {
  local ptime: set[time];
  local sports: vector of port;
  local stime: vector of time;
  local inc: int = 0;

  for (p in source_ports)
    {   
    sports[inc] = p;
    stime[inc] = source_ports[p];
    inc+=1;
    }   
  sort(stime);
  sort(sports);
  for (j in stime)
    {   
    print fmt("Delta Time: %s", stime[j+1] - stime[j]);
    }           
  }
mac@securityonion-Analyst:~/challenges/SANS Forensic$ bro -r evidence06.pcap challenge2.bro
Delta Time: 11.0 secs 785.0 msecs 487.0 usecs
Delta Time: 11.0 secs 730.0 msecs 439.0 usecs
Delta Time: 11.0 secs 795.0 msecs 35.0 usecs
Delta Time: 11.0 secs 735.0 msecs 993.0 usecs
Delta Time: 11.0 secs 884.0 msecs 180.0 usecs
Delta Time: 11.0 secs 960.0 msecs 521.0 usecs
Delta Time: 11.0 secs 907.0 msecs 572.0 usecs   

Once we've run the script, we get the differences in time between port changes. The originator switched to a new source port every 11.7 to 11.9 seconds, which falls in the range of the 10-15 second option for question 9c!

Given what we've worked through in this blog post alone, it's actually rather simple to answer question ten as well!

global first_contact: time;   
event connection_established(c: connection)
      {
      if (c$id$resp_p == 4445/tcp)
         first_contact = c$start_time;
      }

event bro_done()
      {
      print strftime("Successful connection to 4445/tcp at %Y/%m/%d %H:%M:%S", first_contact);
      }
Successful connection to 4445/tcp at 2010/04/28 19:42:02

5 Wrapping up

Hopefully, this post has gotten you interested in looking at the Bro programming language. There are a lot of posts online about how great Bro is, but scarce few cover how to go about learning the scripting language. Bro's scripting language holds a lot of surprises for the freshly minted Bro acolyte, and the most efficient way to go from acolyte to journeyman is to spend time looking at the scripts already shipped with Bro. As I worked through the challenge, I would use grep to search the scripts directory (/usr/local/share/bro on Security Onion) for any relevant terms and read the documentation in the files returned. Think of the default scripts distributed with Bro as a pool of collective knowledge to dip your feet into from time to time.

In part two of this series we will pick up where we've left off and solve more of the questions from the SANS Network Forensics challenge. We'll also take another look at some of the code we used in this post, as it may not be the most "bro-ish" way to solve the problem. While we got the answers we needed, I suspect there are ways to do so that fit more in line with how we will eventually write code to run in production.

