17 October 2008

Web-Harvest – Web Data Extraction Tool

Web-Harvest is an open source web data extraction tool written in Java. It offers a way to collect desired web pages and extract useful data from them. To do that, it leverages well-established techniques and technologies for text/XML manipulation such as XSLT, XQuery and regular expressions. Web-Harvest focuses mainly on HTML/XML-based web sites, which still make up the vast majority of Web content. It can also easily be supplemented with custom Java libraries to augment its extraction capabilities.

The process of extracting data from web pages is also referred to as web scraping or web data mining. The World Wide Web, as the largest database, contains a wealth of data we would like to consume for our own needs. The problem is that this data is in most cases mixed together with formatting code, making the content human-friendly but not machine-friendly. Manual copy-and-paste is error-prone, tedious and sometimes impossible. Web software designers discuss how to make a clean separation between content and style, using various frameworks and design patterns to achieve it, but some kind of merge usually still occurs on the server side, so that a bundle of HTML is what gets delivered to the web client.

Every web site and every web page is composed using some logic. It is therefore necessary to describe the reverse process: how to fetch the desired data out of the mixed content. Every extraction procedure in Web-Harvest is user-defined through XML-based configuration files. Each configuration file describes a sequence of processors that each execute some common task in order to accomplish the final goal. Processors execute in the form of a pipeline: the output of one processor is the input to the next. This is best explained with a simple configuration fragment:
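A fragment along these lines, sketched in the style of the Web-Harvest user manual (the URL and XPath expression are placeholders; check the exact element names against the manual for your version):

```xml
<xpath expression="//a[@shape='rect']/@href">
    <html-to-xml>
        <http url="http://www.somesite.com/"/>
    </html-to-xml>
</xpath>
```

Note that the elements nest inside-out with respect to execution order: the innermost processor (http) runs first, and its output flows outward through html-to-xml to xpath.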

When Web-Harvest executes this part of the configuration, the following steps occur:

  1. The http processor downloads content from the specified URL.
  2. The html-to-xml processor cleans up that HTML, producing XHTML content.
  3. The xpath processor searches the XHTML from the previous step for specific links, giving a sequence of URLs as a result.

Web-Harvest supports a set of useful processors for variable manipulation, conditional branching, looping, functions, file operations, HTML and XML processing, and exception handling. See the user manual for a technical description of the provided processors.
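As a sketch of how several of these processors combine, the fragment below stores extracted link URLs in a variable, then loops over them and writes each downloaded page to its own file. The URL and file paths are hypothetical, and the element and attribute names are recalled from the Web-Harvest 1.0 manual, so verify them against the documentation for your version:

```xml
<!-- Collect all link URLs from a page into a variable. -->
<var-def name="urls">
    <xpath expression="//a/@href">
        <html-to-xml>
            <http url="http://www.somesite.com/"/>
        </html-to-xml>
    </xpath>
</var-def>

<!-- Loop over the URLs, saving each downloaded page to disk. -->
<loop item="link" index="i">
    <list><var name="urls"/></list>
    <body>
        <file action="write" path="page${i}.xml" charset="UTF-8">
            <http url="${link}"/>
        </file>
    </body>
</loop>
```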

You can download Web-Harvest 1.0 here:

webharvest1-exe.zip

Or read more here.




2 Responses to “Web-Harvest – Web Data Extraction Tool”

  1. navin 17 October 2008 at 1:36 pm Permalink

    Web-Harvest is a pretty nice implementation of extraction methods I’d read about before… a definite weekend project

  2. patate 18 October 2008 at 6:08 pm Permalink

    I’ve been using Web-Harvest for the past year or so for random tasks and it does the job very well! Its XML syntax is not so heavy to learn. Their IDE tool is definitely a must for the application; otherwise it’s a pain to go back and forth through their docs.

    I’ve run into some memory issues where I had to increase the JVM’s memory, but otherwise it’s a very stable tool.

    I wish that project would get more attention!! Thanks for the post Darknet!