HoneyProxy GSoC Wrap-Up

Live Bubble Animation

Live Bubble Script

Hey everyone,

with GSoC coming to an end for this year, let’s have a look at what we have accomplished over the last three months:

  • Integration into mitmproxy
  • User Interface redesigned from the ground up.
  • Completely overhauled traffic table, supporting up to a million (lazy-loading) rows.
  • New Filtering Syntax for searching flows (inherited from mitmproxy).
  • Improved Report Editor
  • DNSChef Integration
  • Live Bubble Report Script
  • Huge internal refactorings: HoneyProxy's inner architecture is prepared for the future!
  • Support for multiple scripts and script arguments in mitmproxy
  • Standalone Windows Executable.

To sum it up, I’m really happy with the progress we made. Lots of exciting things made it into mitmproxy.


Searchbar

New Interface with searchbar and filtering syntax


So, when can we expect the next release? Long story short, there are still a few rough edges that will be ironed out as soon as @cortesi has completed the sqlite integration. He has been pretty busy over the last few months and we couldn’t complete this within the GSoC timeline, but I’m committed to continuing my work on HoneyProxy/mitmproxy as I did last year.


2013-07-16_23-11-22

Showing a DNS request with DNSChef


Overall, this was an exciting summer again and I’d like to thank everyone who helped along the way. There are four people who deserve special mention:
Aldo, thank you for patiently answering all my mails and the good discussions.
It’s a pleasure to work with you. :-)
Guillaume and Sebastien, thanks for your great mentoring.
David, thank you for organizing this HoneyNet GSoC again – you’re doing an invaluable job here.

Let’s keep on coding!

Cheers,
Max

PS: You like living on the bleeding-edge? Check out our dev snapshots and the GitHub repo!

Improving IPv6 Attack Detection – Final Blog Post

This is the final blog post about 6Guard (download link), a honeypot-based IPv6 attack detector. Here I will first give a brief overview of 6Guard and then describe the new features that were added to it in detail.

  • Introduction

6Guard is a honeypot-based IPv6 attack detector aimed at detecting link-local attacks, especially when the port-mirror feature of a switch is unavailable. It can help network administrators detect link-local IPv6 attacks at an early stage. Currently 6Guard can detect most attacks initiated by the THC-IPv6 suite, the advanced host discovery methods used by Nmap, and some attacks initiated by Evil Foca and Metasploit.
6Guard architecture
The picture above shows the architecture of 6Guard. As the picture shows, 6Guard is written mainly in Python with Scapy and consists of three modules: Honeypot, Globalpot, and Event Analysis. The Honeypot module is responsible for detecting unicast attacks, the Globalpot module focuses on detecting multicast attacks, and the Event Analysis module receives event messages from Honeypot and Globalpot and generates an attack message when an attack is detected.
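To make the event flow concrete, here is a minimal illustrative sketch of how the Honeypot and Globalpot modules could hand events to an Event Analysis loop through a shared queue. This is not 6Guard's actual code; the function and field names are hypothetical.

# Illustrative sketch of the Honeypot/Globalpot -> Event Analysis flow.
# Not 6Guard's actual code; names and fields are hypothetical.
import queue
import time

event_queue = queue.Queue()

def report_event(source, event_type, details):
    """Called by the Honeypot or Globalpot module when suspicious traffic is seen."""
    event_queue.put({
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "reported_by": source,   # e.g. "Globalpot" or "Honeypot-apple-25:76:C0"
        "type": event_type,      # e.g. "DoS", "Invalid Fragment"
        "details": details,
    })

def event_analysis_loop():
    """Consumes events and promotes them to attack messages."""
    while True:
        event = event_queue.get()
        # In 6Guard the analysis is signature/heuristic based; for illustration
        # we simply turn every event into an attack message.
        print("[ATTACK] %(timestamp)s %(reported_by)s %(type)s" % event)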

  • New features and detection examples

1. Project description

The goal of the project is to improve the detection mechanisms of the current IPv6 honeypot (6Guard) for various recent IPv6 attacks, collect the results, and provide a proper logging method. Currently the IPv6 honeypot (6Guard) is capable of detecting certain IPv6 attacks. As various new IPv6 attack techniques have been discovered and new tools have been released since last year, we need to improve the detection mechanism. For example, it should be able to fully detect attack scenarios involving various extension header combinations, fragmentation techniques, RA-Guard bypass tricks, packet DoS attempts, etc. We also need a proper logging mechanism for the collected results, e.g. a database for better result analysis.

2. Fragmentation attack detection

As mentioned above, 6Guard could not detect attacks that use fragmentation techniques. After reading the 6Guard code, we found that the root cause is that 6Guard lacked traffic reassembly: when attack packets are fragmented, 6Guard cannot reassemble them and detect the attacks. As a result, 6Guard was able to detect the attacks initiated by "fake_advertise6" and "fake_router6", but not the attacks initiated by "fake_advertise6 -D" and "fake_router6 -F". Besides, I read some recent material about IPv6 attacks and defenses and investigated the details of these attacks (see my notes).

In order to detect these attacks, we studied the frag3 preprocessor in Snort and developed a defrag6 module, much of which was learned from Snort, and added it to 6Guard. This module can track fragments within a time window and reassemble them.
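As a rough illustration of the idea (not the actual defrag6 code), fragments can be grouped by their (source, destination, fragment ID) key and reassembled once the final fragment arrives. The following Scapy-based sketch uses made-up names and omits the timeout, overlap, and sanity checks that the real module needs.

# Illustrative sketch of IPv6 fragment tracking/reassembly with Scapy.
# Not the actual defrag6 module; timeouts, overlap handling and error
# checks are omitted.
from scapy.layers.inet6 import IPv6, IPv6ExtHdrFragment

pending = {}  # (src, dst, frag_id) -> list of fragment packets

def track_fragment(pkt):
    """Collect fragments and return the reassembled payload when complete."""
    if IPv6ExtHdrFragment not in pkt:
        return bytes(pkt[IPv6].payload)  # not fragmented

    frag = pkt[IPv6ExtHdrFragment]
    key = (pkt[IPv6].src, pkt[IPv6].dst, frag.id)
    pending.setdefault(key, []).append(pkt)

    # The last fragment has the "more fragments" (m) flag cleared.
    if frag.m == 0:
        frags = sorted(pending.pop(key),
                       key=lambda p: p[IPv6ExtHdrFragment].offset)
        data = b"".join(bytes(p[IPv6ExtHdrFragment].payload) for p in frags)
        return data  # reassembled upper-layer payload
    return None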

Detecting Sample
[ATTACK]
Timestamp: 2013-10-03 11:43:22
Reported by: Globalpot
Type: DoS
Name: Fake Neighbor Advertisement to ff02::1
Attacker: [Unknown]
Victim  : [The whole network]
Target [ff02::1]
Source: [ff02::1]  MAC: 00:50:56:ac:71:ae (VMware, Inc.)
Utility: THC-IPv6: fake_advertise6
Packets: f2a2aa67c22bffa7eaa9100c9dd88b18.pcap

While writing the defrag6 module for 6Guard, we found that the module could also detect many other fragmentation tricks.
a) TTL inspection. Attackers often use the TTL to bypass firewalls, so the TTL inspection checks the TTL value of each fragment; if the TTL value is too small (the firewall can see the fragment, but the victim cannot), 6Guard will discard the fragment and show a warning message.
b) Tiny fragment detection. Many DoS attackers craft tiny fragments to consume system resources, so if a fragment is too small, 6Guard will show a warning message.
c) Apart from that, 6Guard can also detect "Timeout Fragment", "Too big reassembled packet", and "Overlapping Fragments" attacks.

Detecting Sample
[EVENT]
Timestamp: 2013-10-03 11:33:25
Reported by: Globalpot
Type: Invalid Fragment
Name: Overlapping Fragment
Utility: THC-IPv6-fragmentation6, Crafting malformed Packets
Packets: 13c4817263f3bc5e51fbdf81729d314b.pcap

3. Extension headers inspection

When we added the defragmentation module to 6Guard, it still could not reassemble fragments normally due to the abuse of extension headers. For example, if a fragment contains two fragment extension headers with different offsets, the defragmentation module cannot decide which fragment header to use. What's worse, RFC 2460 says that "IPv6 nodes must accept and attempt to process extension headers in any order and occurring any number of times in the same packet", so we cannot simply discard the malformed packet.

To solve this problem, we developed an exthdr module which can inspect the extension headers in detail. Now 6Guard can parse the packet headers; when abnormal extension headers are detected, it reports them and tries to correct the abnormal packet. For example, if there are multiple occurrences of the fragment extension header in one packet, we simply remove the redundant headers and keep only the first one.
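For illustration only, the following Scapy-based sketch shows one way to walk the header chain and flag packets carrying more than one fragment extension header; the real exthdr module inspects many more anomalies and also rewrites the packet.

# Illustrative sketch of spotting duplicate fragment extension headers.
# Not the actual exthdr module.
from scapy.packet import NoPayload
from scapy.layers.inet6 import IPv6ExtHdrFragment

def inspect_fragment_headers(pkt):
    """Report packets that carry more than one fragment extension header."""
    count = 0
    layer = pkt
    while not isinstance(layer, NoPayload):  # walk the header chain
        if isinstance(layer, IPv6ExtHdrFragment):
            count += 1
        layer = layer.payload
    if count > 1:
        print("[EVENT] Invalid Extension Header: %d fragment headers in one packet" % count)
        # A correction strategy (as described above) is to rebuild the packet
        # keeping only the first fragment header; omitted here for brevity.
    return count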

Detecting Sample
[ATTACK]
[EVENT]
Timestamp: 2013-10-03 11:41:14
Reported by: Globalpot
Type: Invalid Extension Header
Name: Invalid Extension Header in packets
Utility: Crafting malformed Packets
Packets: c05e77d836c0c086c88e4d61814fc373.pcap

4. Logging mechanism improvement

Another important goal of this project was to improve the logging mechanism of 6Guard. I investigated the logging mechanisms of different honeypots (see my notes) and improved the logging mechanism of 6Guard by referring to them.

Currently the new 6Guard has three ways to log attacks: textlog, hpfeeds, and mongodb. They can be configured in the new global config file (6guard.cfg). The three log modules share the same parent class (dblog.py); a rough sketch of this layout follows the list below.
a) Textlog is the same as the previous attack.log: it logs the attack information into a file, and you can find the log file at log/text.log.
b) Hpfeedslog means 6Guard can publish the attack information to an hpfeeds broker, and we also added an hpfeeds_subscriber module to test it. You can run "python testing/hpfeeds_subscriber.py -i 96wzTQHn -s Ajgfx9GhgnPuhFey --host hpfeeds.honeycloud.net -p 20000 -c 6guard.attacks subscribe" to subscribe to the attack information.
c) Mongodblog means 6Guard can store the information in a MongoDB database for future analysis.
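The following is only a minimal, hypothetical sketch of that parent/child layout, assuming the standard hpfeeds and pymongo client APIs; the class names and details do not mirror the actual dblog.py.

# Hypothetical sketch of a shared logging base class with text / hpfeeds /
# MongoDB backends. Not 6Guard's actual dblog.py.
import json

class BaseLogger(object):
    def log_attack(self, attack):          # attack is a dict of attack fields
        raise NotImplementedError

class TextLogger(BaseLogger):
    def __init__(self, path="log/text.log"):
        self.path = path
    def log_attack(self, attack):
        with open(self.path, "a") as f:
            f.write(json.dumps(attack) + "\n")

class HpfeedsLogger(BaseLogger):
    def __init__(self, host, port, ident, secret, channel="6guard.attacks"):
        import hpfeeds                     # classic hpfeeds client
        self.conn = hpfeeds.new(host, port, ident, secret)
        self.channel = channel
    def log_attack(self, attack):
        self.conn.publish(self.channel, json.dumps(attack))

class MongoLogger(BaseLogger):
    def __init__(self, uri="mongodb://localhost:27017", db="sixguard"):
        from pymongo import MongoClient
        self.coll = MongoClient(uri)[db]["attacks"]
    def log_attack(self, attack):
        self.coll.insert_one(attack)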

5. Other features

a) Detect the SLAAC MITM attack and Neighbor Advertisement Spoofing initiated by Evil Foca
b) Detect ipv6_neighbor_router_advertisement and ipv6_multicast_ping in Metasploit

Detecting Sample
[ATTACK]
Timestamp: 2013-10-03 22:05:19
Reported by: Honeypot-apple-25:76:C0
Type: SLAAC attack
Name: SLAAC Mitm attack
Attacker: [fe80::20c:29ff:fe65:cb60]  00:0c:29:65:cb:60 (VMware, Inc.)
Victim  : [Honeypot-apple-25:76:C0]  34:15:9E:25:76:C0 (Apple, Inc)
Utility: Evil Foca: SLAACv6 attack
Packets: ff32b1215c9d98cc130bd01a2c12531d.pcap

Conclusion

It was an exciting summer and I learned a lot from this GSoC project. Many thanks to my mentor Tan Kean Siong and my backup mentor Weilin Xu; I could not have completed this project without their help. Thanks to The Honeynet Project and Google for giving me this precious opportunity to learn.

Wrapping up: Beeswarm

So the coding period of GSoC 2013 is officially over, and I want to talk about how far Beeswarm has come in the last five months. I think the best way to explain my work is by introducing and explaining the Beeswarm terminology first.

What exactly is Beeswarm?

Simply put, Beeswarm is a special honeypot with a system of automated clients that use that honeypot. Information about attackers is gathered by analyzing the differences between the expected and actual traffic at the server end of the honeypot.

In technical terms, Beeswarm is a honeytoken project that aims to use client-side traffic as the honeytoken. It consists of three major components:

  • Hive

    The Hive is the actual Honeypot server. It runs on gevent and supports multiple protocols such as Telnet, SSH, SMTP, FTP, etc. It can also be used as a standalone Honeypot in the traditional way (without the other two components).

  • Feeder

    Essentially, the Feeder is the client part of Beeswarm. The job of the Feeder is to
    actively attract attackers to the Honeypot (Hive). It runs client sessions on the remote Hive server. These clients are “semi-intelligent”. By “semi-intelligent”, I mean that they have a rudimentary intelligence which allows them to log in to the different Hive servers and perform legitimate actions on them. For example, the FTP Feeder logs into the server (Hive), lists the files, and then either downloads or deletes them.

  • Beekeeper

    The Beekeeper is the Web-based management interface for Beeswarm. It analyzes and classifies the client sessions that are made on the Hive. In fact, one of its most
    important tasks is to correlate the data from Hive and Feeder, and determine if a particular session was from a malicious attacker. Beekeeper allows easy deployment
    of Hives and Feeders by providing customized bootable ISOs for them.

A simple use case, where one could detect MITM attacks within a network:

beeswarm_user_case.png

Through the course of the last five months, a lot of progress has been made in the following areas:

  1. Interactivity of Hive capabilities
  2. The Beekeeper web-interface
  3. Intelligence of the automated Feeder clients
  4. Deployment

I’ll give a brief update on each of these areas, focusing on the new things, and places where creative ideas are used, which could also be beneficial outside of this particular project.

Interactivity of Hive Protocols

This is the part of the project that I enjoyed working on the most. As mentioned in my previous posts, the Hive now has very interactive SSH and Telnet capabilities. It also boasts a fully functional SMTP server that can capture emails being sent. The FTP capability too is largely functional, although it is probably not completely standards-compliant. It does, however, play along nicely with most FTP clients. The HTTP and HTTPS capabilities also serve real pages from a directory. This means that it is quite possible to emulate devices like routers, which usually have a very simple web UI.

The Beekeeper Web Interface

Beekeeper is the management and information processing part of Beeswarm. It is
built on top of Flask, and Twitter Bootstrap. It allows administrators to
view the current status, add new Hives/Feeders, and download bootable ISO files
for them. Here are a few screenshots of the Beekeeper, in action:

Beeswarm Main Window

Add new Hive

Apart from the management tasks, Beekeeper also does the job of classifying
the sessions done on the Hive, and maintaining the database.

Intelligence of automated Feeder clients

I’ll keep this section short, since this is already discussed in my previous blog
posts. I think a small summary will be useful though:

FTP
The client lists the files, and randomly downloads a few of them. It also
sends FTP commands such as SYST, in order to more accurately emulate real clients
(a rough sketch of this client follows the list below).
HTTP(s)
The HTTP(s) clients extract the links from the root document (/index.html) and
start visiting them. They stop after a randomly generated depth is reached.
POP3(s)
The POP3(s) clients retrieve the list of available emails, and then delete them
all, one by one. This is exactly what some mail clients do.
SMTP
The SMTP client chooses a few emails from the spam corpus that comes with Beeswarm,
and sends a random number of them to the Hive SMTP capability.
SSH/Telnet
These are more complicated than the previous examples. They use a number of methods
to act intelligently, as explained in my previous blog posts.
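
As an illustration of this rudimentary intelligence, here is a minimal sketch of what an FTP client session like the one described above could look like using Python's ftplib. It is not Beeswarm's actual Feeder code, and the host and credentials are placeholders.

# Minimal sketch of a "semi-intelligent" FTP client session, in the spirit
# of the Beeswarm FTP Feeder. Not the actual Feeder code; host and
# credentials are placeholders.
import random
from ftplib import FTP

def run_ftp_session(host="hive.example.org", user="james", password="bond"):
    ftp = FTP(host)
    ftp.login(user, password)
    ftp.sendcmd("SYST")                  # harmless command real clients send
    files = ftp.nlst()                   # list files like a curious user
    for name in random.sample(files, min(2, len(files))):
        # Download a couple of files at random and discard the data.
        ftp.retrbinary("RETR " + name, lambda chunk: None)
    ftp.quit()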

Deployment

The deployment of new Hives and Feeders has become much easier, since Beeswarm now has
the ability to generate customized bootable ISO files for each of them. The ISOs are
basically generated using Debian Live. A very interesting
approach was suggested by my mentor to reduce the time required for ISO generation.
It goes something like this:

  • Generate a “Base” ISO, which has a dummy tarball file embedded into it. This file
    is filled with a specific pattern (I chose a series of 0x07 bytes).
  • Whenever a new custom ISO is required, find and overwrite the special pattern
    mentioned above, with an actual tar file.
  • Use it inside the ISO after boot.

This brought down the time for ISO generation from about 20 minutes to around 13 seconds.
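
A minimal sketch of that find-and-overwrite step might look like the following. It assumes the placeholder region is at least as large as the new tar file, and the file names are hypothetical rather than Beeswarm's actual tooling.

# Sketch of patching a pre-built "base" ISO by overwriting an embedded
# placeholder region (a run of 0x07 bytes) with a custom tar file.
# File names are hypothetical; the real Beeswarm tooling may differ.
PATTERN = b"\x07" * 4096   # marker written into the dummy tarball at build time

def patch_iso(base_iso="base.iso", custom_tar="config.tar", out_iso="custom.iso"):
    with open(base_iso, "rb") as f:
        image = bytearray(f.read())
    with open(custom_tar, "rb") as f:
        payload = f.read()

    start = image.find(PATTERN)
    if start == -1:
        raise ValueError("placeholder pattern not found in base ISO")

    # Measure how long the placeholder run actually is and check the fit.
    end = start
    while end < len(image) and image[end] == 0x07:
        end += 1
    if len(payload) > end - start:
        raise ValueError("custom tar file larger than placeholder region")

    image[start:start + len(payload)] = payload
    with open(out_iso, "wb") as f:
        f.write(image)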

In order to view status easily on the bootable ISOs, I also added a Curses based UI to
Beeswarm. It’s a simple status screen, not a management interface, but it does feature
a running log of current events:

Hive Curses UI

Conclusion

Working on Beeswarm was an amazing experience. If I had to make a list of new things that
I learnt during this time, I’m pretty sure it would fill up a wall.

I want to thank my mentor, Johnny Vestergaard, for this awesome learning experience. He pulled me out of a tight spot more often than I’d like to admit :-) . I also thank Lukas Rist, my backup mentor, for teaching me about Flask, Bootstrap, and web-frameworks in general. Without that excellent web-development session, I would probably still be trying to fix CSS and HTML on the Beekeeper Web-app. Thanks, The Honeynet Project, for accepting my application. Also, thanks to Google for paying me and giving me the opportunity to learn. These two things seldom occur simultaneously. Long live GSoC! :)

 

Thug Distributed Task Queuing – Final Blog Post

Hi Everyone,

This is the final blog post about the Thug Distributed Task Queuing project. Here I will describe the distributed feature that we have added to the already existing Thug project, which now makes analysis of URLs easy and efficient.

Project Overview:

Previously, Thug worked as a stand-alone tool and did not provide any way to distribute URL analysis tasks to different workers. For the same reason, it was not able to analyze how attacks differ depending on the user's geolocation (unless it is provided with a set of differently geolocated proxies to use, obviously). After implementing this project, we are able to solve both problems by creating a centralized server which is connected to all the Thug instances running across the globe and distributes URLs (potentially according to geolocation analysis requirements). The clients then consume the tasks distributed by the centralized server, process them, and store the results in a database.

Server:

On the server we are able to handle all the clients (workers), and we can now distribute URLs on the basis of the clients' geolocation: if we want to check the behavior of a URL in a particular country, we can put that URL in that country's queue, and a client connected from that country will process the URL and send back the result. So we are not only able to distribute URLs among clients running all over the world, but also to analyze attacks targeting particular countries.

These are working demos of Flower (the Celery monitoring tool) showing workers processing tasks:

Workers connected from India and processing tasks:

Screenshot from 2013-09-29 13:26:34

Tasks description which are running or completed:

Screenshot from 2013-09-29 13:30:26

 

Worker:

Workers are the clients, or Thug instances, running all over the world. They are connected to two types of queues: the generic queue and their country queue (an Indian client, for example, would also be connected to the India queue). Whenever the server puts URLs in the queues, the workers connected to those queues consume the URLs, process them, and send the results back to the server for further processing; a minimal sketch of this routing is shown below.
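
The following is only a rough sketch of how such routing can be expressed with Celery; the broker URL, task body, and queue names are placeholders rather than the project's actual configuration.

# Rough sketch of geolocation-aware task routing with Celery.
# Broker URL, queue names and the task body are placeholders.
from celery import Celery

app = Celery("thugd", broker="amqp://guest@localhost//")

@app.task
def analyze_url(url):
    # A real worker would launch a Thug analysis here and store the results.
    return "analyzed %s" % url

# Server side: push a URL either to the generic queue or to a country queue.
analyze_url.apply_async(args=["http://example.com"], queue="generic")
analyze_url.apply_async(args=["http://example.com"], queue="india")

# Worker side (e.g. an Indian client) consumes both queues; module name is
# a placeholder:
#   celery -A thugd_tasks worker -Q generic,india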

 

Architecture:

 

Development:

Here I want to describe the optimizations I worked on and am still working on. I made two other prototypes in which I tried some optimizations; they also reside in the GitHub repo. In the first prototype I tried to distribute URLs according to each client's system performance, i.e. if a client's system is very fast, we give it more URLs than the others. This was done using Redis: each worker writes a performance value into a Redis sorted set every two minutes (for example), and whenever the server wants to distribute URLs it queries the sorted set and allocates more URLs to the clients with higher performance values (as better system performance means a better system). This way we might get quicker responses from the clients, but a problem remained: it made distributing URLs according to geolocation difficult. A minimal sketch of the sorted-set bookkeeping is shown below.
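This sketch assumes the standard redis-py client (the 3.x zadd style); the key name and scoring are purely illustrative and not the prototype's actual code.

# Illustrative sketch of performance-based scheduling with a Redis sorted set.
# Assumes redis-py >= 3.0; key name and scoring are made up.
import redis

r = redis.StrictRedis(host="localhost", port=6379)

def report_performance(worker_id, score):
    """Worker side: periodically record the current performance score."""
    r.zadd("worker_performance", {worker_id: score})

def pick_fastest_workers(n=3):
    """Server side: get the n workers with the highest performance values."""
    return r.zrevrange("worker_performance", 0, n - 1)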

The second prototype's optimization was very simple: we just increased the prefetch value for systems with better performance scores, so the clients with better systems prefetch, and therefore process, more URLs than the others.

 

That’s all I wanted to share about my project. All in all, this was a super exciting summer and I enjoyed and learned a lot by participating in GSoC.

I want to thank everyone who helped me in completing my project:

First and most important is Angelo sir (my mentor), who helped me a lot even during his busy times and answered each and every one of my dumb queries. Thanks a lot sir, he really is an amazing guy :)

Then I want to thank Sebastian sir (backup mentor) and Kevin sir. I had some great discussions with Sebastian sir which helped me a lot with the project, and Kevin sir worked as an unofficial mentor: he helped me a lot in working with Celery and advised me a lot while implementing the project.

I also want to thank David sir for organizing and managing the Honeynet GSoC so well, and I would also like to thank Tan Kean Siong sir for starting an introduction mailing list to give students a platform to introduce themselves.

Let’s always keep on working!

The ThugD GitHub repo can be found at https://github.com/Aki92/Thug-Distributed.

More details and documentation about the project can be found at http://aki92.github.io/Thug-Distributed/.

Thanks,
Akshit Agarwal

PwnyPot management integration with Cuckoo – Final Blogpost

The official pencils down date passed yesterday. With this last blog post I want to give an overview of what I have achieved during the last three months, give you a simple introduction to using PwnyPot with Cuckoo, and reflect on my experiences with GSoC and the Honeynet Project.

Overview

As the title of my project already makes clear, the original plan was to make use of the well-known malware analysis tool Cuckoo to manage automatic analysis with the high-interaction client-side honeypot PwnyPot.

Before I started my project, PwnyPot consisted of a DLL that was injected into processes chosen through a GUI on a guest/analysis system. Through the same GUI, the options for analysis and prevention techniques were assigned.

final_pwnypot

 

All analysis information was logged on the guest system into simple log files and one XML document. Instead of writing completely new management software for automated execution and analysis of malware, I decided to modify PwnyPot slightly to work with Cuckoo. Cuckoo has been developed continuously for several years and is easy to adapt to your needs. The following main changes were necessary:

  • File transmission from PwnyPot.dll to the Cuckoo result server to transmit analysis information
  • Cuckoo Pipe inside the PwnyPot.dll to notify Cuckoo for new (sub) processes of the malware
  • Cuckoo processing module to parse analysis information
  • Cuckoo reporting module to display the results for example in HTML
  • Modify Cuckoo to read PwnyPot specific configuration
  • Enable injecting different DLLs via Cuckoo (instead of cuckoomon.dll)

Cuckoo's modular architecture permits adding all this functionality without touching much of the core, simply by adding new modules. I only changed the analyzer and the analyzer packages of the core to make Cuckoo behave as needed; these files had to be changed to allow injection of a DLL other than cuckoomon.dll. As Mark (one of the co-developers of Cuckoo) told me later, this feature may also be interesting for others. Therefore I created this pull request.
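
To give a sense of how small such an addition can be, here is a minimal sketch of a Cuckoo processing module, assuming the standard Processing base class from Cuckoo 1.x; the module name, result key, and parsing logic are hypothetical and not the actual PwnyPot module.

# Minimal, hypothetical sketch of a Cuckoo processing module that collects
# PwnyPot output dropped during an analysis. Not the actual PwnyPot module.
import os

from lib.cuckoo.common.abstracts import Processing

class PwnyPotResults(Processing):
    """Collects PwnyPot log files uploaded by the guest."""

    def run(self):
        self.key = "pwnypot"          # section name used by reporting modules
        results = []
        log_dir = os.path.join(self.analysis_path, "pwnypot")
        if os.path.isdir(log_dir):
            for name in os.listdir(log_dir):
                with open(os.path.join(log_dir, name)) as f:
                    results.append({"file": name, "content": f.read()})
        return results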

The modifications to the PwnyPot code were also quite easy, because the protocols (ResultServer, named pipe) that Cuckoo uses to retrieve analysis information from the guest are pretty simple. The changes to PwnyPot that were only necessary for working with Cuckoo were wrapped in preprocessor definitions. A new build configuration named “CuckooRelease” was created which builds PwnyPot.dll with Cuckoo support. The old configuration “Release” still builds the DLL with the original configuration and analysis output.

Contrary to my expectations, I managed to implement all these changes, including tests and documentation, around the mid-term evaluations. My plan was to use the rest of the time to improve PwnyPot itself. At that time my knowledge of concrete exploitation and exploit mitigation techniques was quite limited, so I decided to pick only a few features that I expected could be implemented fairly easily. After some research I decided to work on the following features:

  • Detect direct DEP disable: SetProcessDEPPolicy, NtSetInformationProcess
  • Detect LdrHotPatchRoutine (cp. Technet Blog)
  • Prevent WriteProcessMemory overwrite
  • Structured Exception Handler Overwrite Protection (enable option for Win Vista+, own implementation for versions below)

How to use PwnyPot with Cuckoo

Both code repositories are hosted on GitHub: PwnyPot and Cuckoo. Inside the PwnyPot git I used the branch “cuckoo_integration” and inside the Cuckoo git the branch “pwnypot_integration”. If you just want to use Cuckoo with PwnyPot, there is no need to check out the PwnyPot repo; the pwnypot_integration branch already contains PwnyPot.dll with Cuckoo support.

Note: For more detailed documentation of usage, features and configuration parameters, please read the HTML documentation in cuckoo/docs/book/src or build it inside that directory with `make html`. The documentation to set up Cuckoo, the host and the guest can be found on this website.

Configuration of PwnyPot is done in conf/pwnypot.conf. To analyze a file or URL you can use the web interface or the submit.py script inside the utils folder. Example usage with submit.py:

./submit.py --package ie --options dll=PwnyPot.dll --url http://example.com

The web interface can be started by changing directory to web/ and by executing

python manage.py runserver

For the web interface of Cuckoo 1.0 you need to have mongodb enabled in conf/reporting.conf. After analysis you should see your results if you follow the “Recent” link in the top navigation of the web interface.

If you allow malware execution in the PwnyPot configuration, cuckoomon.dll is injected into the malware process. This way, the behavior of the malware after exploitation is analyzed by cuckoomon. You will find this information in the PwnyPot tab as “Malware Execution” if such a behavior analysis has been performed:

final_mw

Conclusion

I do not regret participating in this year's Google Summer of Code. It was a huge amount of work, but I have also learned a lot. Special thanks to my mentors Georg Wicherski, Mark Schlösser and Shahriyar Jalayeri, who were always available for questions. I will try to continue contributing to both projects in the future.

Network Analyzer Project Updates (Hao Ma) – Week 13 – More Examples

1. Test for a Trojan binary file:

$ sudo python ovizcli.py -i /Users/zqzas/Downloads/MyLogerMailEnd.exe -vt -o /tmp

$ {"scan_id": "eb2ba9d47c3a3c0120738069bc146de637497b60ab0d4152e582d80c136f1d68-1379835752", "sha1": "c83478bc431e936f36919c59103bd6ba845c8060", "resource": "eb2ba9d47c3a3c0120738069bc146de637497b60ab0d4152e582d80c136f1d68", "response_code": 1, "sha256": "eb2ba9d47c3a3c0120738069bc146de637497b60ab0d4152e582d80c136f1d68", "permalink": "https://www.virustotal.com/file/eb2ba9d47c3a3c0120738069bc146de637497b60ab0d4152e582d80c136f1d68/analysis/1379835752/", "md5": "7d867d6bd5fc3015a31fdfa121ba9187", "verbose_msg": "Scan request successfully queued, come back later for the report"}

 

Then a web page pops up showing the results in a table and saying that your request has been queued.

QQ20130922-1

 

You can go to the permalink later for further information, e.g. https://www.virustotal.com/file/eb2ba9d47c3a3c0120738069bc146de637497b60ab0d4152e582d80c136f1d68/analysis/1379835752/

According to VirusTotal, it is clearly a malicious binary file, most likely a Trojan executable.

 

2. Test for a malicious website:

http://xa.jjhh.com/

Issue the following command:

    $ sudo python ovizcli.py -i http://xa.jjhh.com/ -vt -o /tmp

Then the report page will pop up, saying it is detected as a malware site by Google Safebrowsing, Sophos, and Fortinet.

The jsunpack-n results can be checked by changing “-vt” to “-js” in the command:

  $ sudo python ovizcli.py -i http://xa.jjhh.com/ -js -o /tmp


Network Analyzer Project Updates – Web UI

In this post I’ll introduce our simple Web UI prototype.

Before we open the browser we need to start two different scripts under the ovizart-ng/bin/ directory. The first one is the daemon service, which is basically a small HTTP server providing a REST API. To start it:

./api_server.py start

This command will start an HTTPS server on localhost:9009. In order to change these values you can use this syntax:

api_server.py [-h] [--host HOST] [--port PORT] [--ssl] {start,stop,restart}

The second command is responsible for starting the Web UI, which is based on Django 1.5. To start it:

./ui_server.py

This command will start the Web UI on localhost:8000. Now we are ready: open a browser and type http://localhost:8000/ in the address bar.

This screen will show up for login and daemon settings. Before we move on: the Daemon Options will be moved to a configuration file; for ease of development and debugging I put those fields on the login form. These options should match the daemon parameters; with the default parameters the user does not need to change anything.

In order to log in, a user must be created with the create_user.py script under the ./ovizart-ng/bin/ directory. In our example both the username and the password are admin. This is not a default user account; in fact, the system has no default users, so one must be created right after installation.
WebUI-1

After login (because it is the first login), the system does not contain any analyses. In order to start one, click on the ‘New’ button in the left corner.

WebUI-2

This screen needs some makeup, but it has some nice features. For example, besides uploading your pcap file you can upload your analyzers as well, so that you don’t need to have an account on the core machine to use your own analyzer. I’m well aware that this feature could be very dangerous. I’m planning to take two measures in order to improve security. First, improving user management by adding roles and rights, so that only certain users will have the right to upload analyzers. The second one is sandboxing: running the analyzer module in a sandbox will make this feature a little bit safer.

WebUI-3

Select your pcap file to upload and click on the ‘Upload & Start’ button. Your next screen will be this one:

WebUI-4

After some time (the system does not have a progress bar to show the current status of the evaluation), click on the ‘Browse’ button or refresh the page to see the changed status of the analysis. If you want to delete an analysis, click on the checkbox on the left side of the analysis and click on the ‘Delete’ button. This action cannot be undone: it will delete all information, files, reports, etc. generated during that analysis.

WebUI-5

Finished analyses have a summary in the rightmost column: number of packets, name of the pcap file, and number of streams extracted from the given pcap file. Clicking on the ID will open the details screen.

WebUI-8

At the top we have the summary section, which contains basic information about the given pcap file. The next section contains information about the streams extracted from the given pcap file. The stream list is a collapsible table. Each row of this table starts with the Application/Transport Layer protocol information. Then we have the standard stream identifiers: Source IP, Source Port, Destination IP, Destination Port. The Number of Packets follows the identifiers.

In the rightmost column we can see a file icon and a magnifier icon. The file icon means that the system extracted some file(s) from that specific stream. The magnifier means that the system has analyzer reports for that stream’s extracted files.

WebUI-6

Clicking on a row will expand that row and show additional info about that stream.

  • Pcap file: clicking on the filename will start the download of that stream-specific pcap file.
  • Reassembled Traffic: these links provide the reconstructed application layer traffic in a file for further analysis/study/examination. You can see three different links:
    A -> B, this file contains all requests made by A.
    A <- B, this file contains all responses given by B.
    A <-> B, this file contains the whole request/response exchange between A and B.
    Clicking on the links will start the download of those files.
  • Attachments: this section contains information about the files extracted from that stream. In the right column you can also see the mime-type of the extracted file. Clicking on the link will start the download of the extracted file.
  • Analyzer Reports: the current system has VirusTotal and Cuckoo wrappers as analyzers. Clicking on those links will open a new tab with the results. Because of limitations, analysis results may take some time to be ready. Here is a sample screenshot from VirusTotal.

WebUI-7

This is our first prototype to show the infrastructure in a more user-friendly way :)

Cheers,
Gurcan

Network Analyzer Project Updates – Week 12

Three scripts were added under the ovizart-ng/bin/ directory to control the ovizart system.

  • create_user.py: A basic tool to create ovizart users to access the system.
    $ ./create_user.py 
    usage: create_user.py [-h]
                          <username> <password> <name> <surname>
                          <email@example.com>
    create_user.py: error: too few arguments
    
    $ ./create_user.py ggercek 123456 Gurcan GERCEK gurcangercek@gmail.com
    User created successfully.
    Now retrieving user for testing
    User retrieved successfully, user_id: 5239e1641e75ed0e0845d715
  • api_server.py: A basic daemon management script, which triggers start/stop/restart of the Core & REST API system.
    $ ./api_server.py 
    usage: api_server daemon script [-h] [--host HOST] [--port PORT] [--ssl]
                                    {start,stop,restart}
    api_server daemon script: error: too few arguments
    
    $ ./api_server.py --host localhost --port 9009 --ssl start
    value: 9009
    $ ./api_server.py stop
    (No output if successfully terminated)

    --host HOST: specifies the binding address of the server. (default: localhost)
    --port PORT: specifies the binding port. (default: 9009)
    --ssl: specifies whether the server will run over http or https. (default: false)

  • ui_server.py: A basic start script for the Web UI. Use CTRL+C to stop execution.
    $ ./ui_server.py 
    Validating models...
    
    0 errors found
    September 18, 2013 - 14:24:09
    Django version 1.5.1, using settings 'web.settings'
    Development server is running at http://127.0.0.1:8000/
    Quit the server with CONTROL-C.

For the Web UI the following items were added.

  • Remove analysis option added: This feature will clean up everything (reassembled traffic, extracted files, split pcap files, etc.) related to that analysis.
  • Listing details of streams
  • Dynamic Analyzer Loading: While uploading pcap files, users can upload their custom analyzers as well, so it will be easier to extend the tool. But be cautious and use this feature at your own risk; I plan to add sandboxing for this feature, but that will take some time. A rough sketch of the loading idea follows this list.
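
Purely as an illustration of the idea (not ovizart's actual loader), an uploaded analyzer could be loaded dynamically like this; the Analyzer/analyze() interface is a hypothetical convention, and nothing here sandboxes the loaded code.

# Illustrative sketch of dynamically loading an uploaded analyzer module.
# Not ovizart's actual loader; the Analyzer/analyze() interface is a
# hypothetical convention, and the loaded code runs unsandboxed.
import importlib.util

def load_analyzer(path):
    """Load an uploaded analyzer .py file and return an Analyzer instance."""
    spec = importlib.util.spec_from_file_location("custom_analyzer", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)     # runs untrusted code! sandboxing needed
    return module.Analyzer()

# analyzer = load_analyzer("/tmp/uploads/my_analyzer.py")
# report = analyzer.analyze("/tmp/uploads/capture.pcap")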

I will post the details of Web UI with screenshots.

Cheers,
Gurcan

Network Analyzer Project Updates (Hao Ma) – Week 12 – Testing Report

Ovizart-ng is able to handle four basic types of input:

1. PCAP: uses the core analyzer.

2. URL: may call an external analyzer like VirusTotal or Jsunpack-n.

3. Binary file: handled by the VirusTotal and Cuckoo analyzers.

4. Text file (like an HTML or JavaScript file): handled by the Jsunpack-n analyzer.

HOWTO:

1. If you’d like to analyze a pcap, there are two ways:

1) Use the CLI tool of ovizart-ng:

Example:

$ sudo python ovizcli.py -i /Users/zqzas/Projects/ovizart-ng/test/pcap/test-http.pcap -o /tmp
I’m awesome
name: /Users/zqzas/Projects/ovizart-ng/test/pcap/test-http.pcap type: PCAP
Analysis Object{
id: 1
startTime: 2013-09-10 15:41:14.105801
user: <NoUserDefined>
config: <ovizconf.Config instance at 0x101b9a0e0>
status: FINISHED
data: [Data Object{
tags: {'data_source': 'PCAP', 'app_layer_protocol': 'HTTP', 'attachments': [('_Websidan_index.html', 'regular file', None)], None: ['_Websidan_index.html']}
data: {'stream': Stream Object {key: 6_10.1.1.101_3188_10.1.1.1_80, protocol: 6, srcIP: 10.1.1.101, srcPort: 3188, dstIP: 10.1.1.1, dstPort: 80, startTime: 1100903355.43, numberOfPacket: 14, pcapFile: /tmpanalysis_20130910_154114_105820/test-http.pcap/6_10.1.1.101_3188_10.1.1.1_80/6_10.1.1.101_3188_10.1.1.1_80.pcap}}
}, ……omitted

2) Use the interactive shell of ovizart-ng:


$ cd shell/

$ python ovizshell.py

(Cmd) set input = /Users/zqzas/Projects/ovizart-ng/test/pcap/test-http.pcap
(Cmd) set output = /tmp
(Cmd) show
{'output': '/tmp', 'external_tool': '', 'verbose': '', 'input': '/Users/zqzas/Projects/ovizart-ng/test/pcap/test-http.pcap'}
(Cmd) start
name: /Users/zqzas/Projects/ovizart-ng/test/pcap/test-http.pcap type: PCAP
Analysis Object{
id: 1
startTime: 2013-09-10 15:20:15.713222
user: <NoUserDefined>
config: <ovizconf.Config instance at 0x101c99908>
status: FINISHED
data: ….omitted

 

2. To analyze a URL:

(Cmd) set input = http://honeynet.org
(Cmd) set output = /tmp
(Cmd) set external_tool = -vt
(Cmd) start
name: http://honeynet.org type: URL
Virus-total analyzing …………………………
['http://honeynet.org']
——————-
{"permalink": "https://www.virustotal.com/url/7547b57712941e07a6f9f786a6f311b534c94c0e2ba59126d7f1ef4ff24866e4/analysis/1377971788/", "url": "http://honeynet.org/", "response_code": 1, "scan_date": "2013-08-31 17:56:28", "scan_id": "7547b57712941e07a6f9f786a6f311b534c94c0e2ba59126d7f1ef4ff24866e4-1377971788", …omitted

Set another external analyzer, “jsunpack-n”:

(Cmd) set external_tool = -js
(Cmd) show
{'output': '/tmp', 'external_tool': '-js', 'verbose': '', 'input': 'http://honeynet.org'}
(Cmd) start
name: http://honeynet.org type: URL
Jsunpack-n analyzing …………………………

http://honeynet.org

!!! /Users/zqzas/Projects/ovizart-ng/analyzer/jsunpack_n/jsunpack-n-read-only
The key / has the following output in recursive mode
[nothing detected] /
info: [0] no JavaScript
file: stream_bf9b49684b9623595fbb8e12648d3d19ecb5c77c: 19 bytes
Note that none of the files are actually created since self.outdir is empty.
Instead, you could go through each url and look at the decodings that it creates
Looking at key /, has 1 files and 1 messages, that follow:
file type=stream, hash=bf9b49684b9623595fbb8e12648d3d19ecb5c77c, data=19 bytes
output message printable=1, impact=0, msg=[0] no JavaScript

Response:
[['The reports has been saved in /Users/zqzas/Projects/ovizart-ng/analyzer/jsunpack_n/jsunpack-n-read-only/log.'], []]


3. To analyze a binary:

1) VirusTotal:

(Cmd) set input = /Users/zqzas/Downloads/anyexe.exe
(Cmd) set output = /tmp
(Cmd) set external_tool = -vt
(Cmd) start
name: /Users/zqzas/Downloads/anyexe.exe type: BINARY
Virus-total analyzing …………………………

{"scan_id": "209342a2755315c7cef091f4f56de0875ee9cafee73814c05faf5db1a3955ee4-1378802153", "sha1": "5d92013fe866395a1c5370192d9ad83e88328a64", "resource": "209342a2755315c7cef091f4f56de0875ee9cafee73814c05faf5db1a3955ee4", "response_code": 1, "sha256": "209342a2755315c7cef091f4f56de0875ee9cafee73814c05faf5db1a3955ee4", "permalink": "https://www.virustotal.com/file/209342a2755315c7cef091f4f56de0875ee9cafee73814c05faf5db1a3955ee4/analysis/1378802153/", "md5": "fb086841437211545b5260209fa9ecf7", "verbose_msg": "Scan request successfully queued, come back later for the report"}

2) Cuckoo:

(Cmd) set input = /Users/zqzas/Downloads/anyexe.exe
(Cmd) set output = /tmp
(Cmd) set external_tool = -ck
(Cmd) start
name: /Users/zqzas/Downloads/anyexe.exe type: BINARY
Cuckoo analyzing …………………………
You may check the reports at: ( http://81.167.148.242:8090/tasks/view/202 ) after it’s available.


4. To analyze a text file (an HTML file with JS):

(Cmd) set input = /Users/zqzas/Projects/ovizart-ng/shell/report.html
(Cmd) set output = /tmp
(Cmd) set external_tool = -js
(Cmd) start
name: /Users/zqzas/Projects/ovizart-ng/shell/report.html type: PLAINTEXT
Jsunpack-n analyzing …………………………
/Users/zqzas/Projects/ovizart-ng/shell/report.html
!!! /Users/zqzas/Projects/ovizart-ng/analyzer/jsunpack_n/jsunpack-n-read-only
The key / has the following output in recursive mode
[nothing detected] /
info: [0] no JavaScript
file: stream_e4a62c83ace44261a545060c454a6c6fd3c677f1: 50 bytes
Note that none of the files are actually created since self.outdir is empty.
Instead, you could go through each url and look at the decodings that it creates
Looking at key /, has 1 files and 1 messages, that follow:
file type=stream, hash=e4a62c83ace44261a545060c454a6c6fd3c677f1, data=50 bytes
output message printable=1, impact=0, msg=[0] no JavaScript
Response:
[[], ['The reports has been saved in /Users/zqzas/Projects/ovizart-ng/analyzer/jsunpack_n/jsunpack-n-read-only/log.']]

The above cases use the interactive shell; the same can be achieved equivalently with ovizcli.py.