WWW::CheckSite::Spider - A base class for spidering the web
    use WWW::CheckSite::Spider;

    my $sp = WWW::CheckSite::Spider->new(
        uri => 'http://www.test-smoke.org',
    );

    while ( my $page = $sp->get_page ) {
        # $page is a hashref with basic information
    }
or to spider a site behind HTTP basic authentication:
    package BA_Mech;
    use base 'WWW::Mechanize';

    sub get_basic_credentials { ( 'abeltje', '********' ) }

    package main;
    use WWW::CheckSite::Spider;

    my $sp = WWW::CheckSite::Spider->new(
        ua_class => 'BA_Mech',
        uri      => 'http://your.site.with.ba/',
    );

    while ( my $page = $sp->get_page ) {
        # $page is a hashref with basic information
    }
This module implements a basic web spider, based on WWW::Mechanize. It takes care of putting pages on the "still-to-fetch" stack. Only URIs with the same origin will be stacked, taking the robots rules on the server into account.
The following constants are exported on demand with the :const tag.
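For example, to import them:

    # Import the exported constants via the :const tag
    use WWW::CheckSite::Spider qw( :const );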
Currently supported options (any others will be set, but not used); an example follows the list:
uri => <start_uri> [mandatory]
ua_class => by default WWW::Mechanize
exclude => <exclude_re> (qr/[#?].*$/)
myrules => <\@disallow>
lang => languages to pass to Accept-Language: header
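A sketch that pulls these options together (the host, the extra rule and the language value are illustrative only):

    my $spider = WWW::CheckSite::Spider->new(
        uri      => 'http://www.example.com/',   # mandatory starting point
        ua_class => 'WWW::Mechanize',            # the default user agent class
        exclude  => qr/[#?].*$/,                 # the default exclude pattern
        myrules  => [ 'Disallow: /private/' ],   # extra robots.txt-style rules
        lang     => 'en, nl',                    # value for the Accept-Language: header
    );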
Fetch the page and do some bookkeeping. It returns the result of $spider->process_page().
Override this method to make the spider do something useful. By default it returns the following fields (a sketch of an override follows the list):
    org_uri    Used for the request
    ret_uri    The URI returned by the server
    depth      The depth in the browse tree
    status     The return status from the server
    success    Shortcut for status == 200
    is_html    Shortcut for ct eq 'text/html'
    title      What's in the <TITLE></TITLE> section
    ct         The content-type
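A minimal sketch of such an override; the arguments are passed through to the base class unchanged, and the extra fetched_at field is made up for illustration:

    package My::Spider;
    use base 'WWW::CheckSite::Spider';

    sub process_page {
        my $self = shift;

        # Let the base class collect the standard fields first
        my $page = $self->SUPER::process_page( @_ );

        # Add an application-specific field (illustrative only)
        $page->{fetched_at} = time;

        return $page;
    }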
Filter out the URIs that will fail:
qr!^(?:mailto:|mms://|javascript:)!i
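A tiny check against that pattern (the address is made up):

    my $skip = qr!^(?:mailto:|mms://|javascript:)!i;

    # mailto: links are filtered out before they are queued
    print "filtered\n" if 'mailto:someone@example.com' =~ $skip;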
Return the URI to be spidered or undef for skipping.
Returns a standard name for this UserAgent.
Initialise the agent that is used to fetch pages. The default class is WWW::Mechanize but any class that has the same methods will do.
The ua_class needs to support the following methods (see WWW::Mechanize for more information about these):
Return the current user agent.
Create a new agent and return it.
The Spider uses the robot rules mechanism. This means that it will always get the /robots.txt file from the root of the webserver to see if we are allowed (actually "not disallowed") to access pages as a robot.
You can add rules for disallowing pages by specifying a list of lines in the robots.txt syntax to @{ $self->{myrules} }.
This will determine whether a URI should be spidered. The rules are simple:
Has the same base URI as the one we started with.
Is not excluded by the $self->{exclude} regex.
Is not excluded by the robots.txt mechanism.
Checks the URI against the robot rules.
This will set up a WWW::RobotRules object. @{ $self->{myrules} } is used to add rules and should be in the RobotRules format. These rules are added to the ones found in robots.txt.
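Conceptually, the robots.txt content and the extra myrules lines end up in one WWW::RobotRules object; a rough sketch of that combination (not the module's literal code, and the host and rules are made up):

    use WWW::RobotRules;

    my @myrules = ( 'Disallow: /private/' );

    # Rules as fetched from the server's /robots.txt ...
    my $robots_txt = "User-agent: *\nDisallow: /cgi-bin/\n";

    # ... extended with the extra lines from myrules, in the same syntax
    $robots_txt .= join( "\n", @myrules ) . "\n";

    my $rules = WWW::RobotRules->new( 'WWW-CheckSite-Spider' );
    $rules->parse( 'http://www.example.com/robots.txt', $robots_txt );

    print "allowed\n" if $rules->allowed( 'http://www.example.com/index.html' );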
Returns the current RobotRules object.
Abe Timmerman, <abeltje@cpan.org>
Please report any bugs or feature requests to bug-www-checksite@rt.cpan.org, or through the web interface at http://rt.cpan.org. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
Copyright MMV Abe Timmerman, All Rights Reserved.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
To install WWW::CheckSite, copy and paste the appropriate command into your terminal.

With cpanm:

    cpanm WWW::CheckSite

With the CPAN shell:

    perl -MCPAN -e shell
    install WWW::CheckSite
For more information on module installation, please visit the detailed CPAN module installation guide.