If you have ever searched for something with Yahoo, Google, or a web business directory, you have probably noticed that the search results usually share the same logical structure. Happy Harvester can extract this structured data and store it in a delimited file format. Once your data has been saved, you can import it into Microsoft Excel or any other database program.
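To illustrate what such delimited output looks like, here is a minimal sketch (the field names, sample records, and filename are invented for this example) that writes harvested records as a semicolon-delimited file Excel can open:

```python
import csv

# Hypothetical records, as they might come out of a harvest run.
rows = [
    {"artist": "Miles Davis", "album": "Kind of Blue", "price": "9.99"},
    {"artist": "John Coltrane", "album": "Blue Train", "price": "8.49"},
]

# Write a semicolon-delimited file with a header row.
with open("harvest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["artist", "album", "price"],
                            delimiter=";")
    writer.writeheader()
    writer.writerows(rows)
```

The semicolon delimiter is just one choice; a comma or tab works the same way, as long as the importing program is told which delimiter to expect.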
Before you can extract the data, you have to analyse the HTML source you want to harvest. You can tell Happy Harvester to extract data between two HTML strings; the text in between is stored. You can create multiple selection sets to extract, for example, titles, prices, and stock information. This lets you query a music CD store for artists, album titles, and prices, or collect stock information on a regular basis.
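The idea of extracting text between two HTML strings can be sketched as follows (the function name and the sample page snippet are invented for illustration; Happy Harvester's internals may differ):

```python
import re

def extract_between(html, start, end):
    """Return all text fragments found between the start and end markers."""
    pattern = re.escape(start) + r"(.*?)" + re.escape(end)
    return re.findall(pattern, html, re.DOTALL)

# A made-up fragment of a CD-store results page.
page = '<li class="album"><b>Kind of Blue</b> <span>$9.99</span></li>'

titles = extract_between(page, "<b>", "</b>")
prices = extract_between(page, "<span>$", "</span>")
```

Each (start, end) pair corresponds to one selection set, so running several pairs over the same page yields the separate columns (title, price, and so on) of the harvested table.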
For crawling multiple web addresses you can import a list of URLs, or you can use the URL generator to build a range. The latter option is useful when the URLs follow a logical structure based on numbers. Many business directories, web forums, and search engines use structured URLs to query their databases. Some example profiles are included with Happy Harvester for inspiration.
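A numeric URL range of this kind can be sketched in a few lines (the function name and the example.com template are placeholders, not part of the product):

```python
def generate_urls(template, start, stop, step=1):
    """Expand a numeric placeholder into a list of URLs, inclusive of stop."""
    return [template.format(n) for n in range(start, stop + 1, step)]

# A hypothetical paginated results URL with a page-number placeholder.
urls = generate_urls("http://example.com/results?page={}", 1, 3)
```

Any URL where only a counter changes between pages (page numbers, record IDs, dates) fits this pattern.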
Features:
- Harvest all sort of data from web sites.
- Simple and intuitive user interface.
- Automatically browse pages with a next-page definition.
- Advanced URL generator.
- Harvest local HTML and text files.
- Support for sites with basic authentication.
- HTTP POST and cookie management support.
- Export data in Excel or CSV format.
- Harvest master and detail pages.
- Advanced mode with script rules for complex site structures.
- Command-line options for automatic scheduling.
Keywords: screen scraper, web crawler, web scraper, web extractor, html parser, web grabber, data extraction, web content monitoring, data integration