HTML::RobotsMETA - Parse HTML For Robots Exclusion META Markup
    use HTML::RobotsMETA;

    my $p = HTML::RobotsMETA->new;
    my $r = $p->parse_rules($html);
    if ($r->can_follow) {
        # follow links here!
    } else {
        # can't follow...
    }
HTML::RobotsMETA is a simple HTML::Parser subclass that extracts robots exclusion information from meta tags. There's not much more to it ;)
Currently HTML::RobotsMETA understands the following directives:
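The directive names in the example below are the standard robots-exclusion META values (ALL, NONE, INDEX, NOINDEX, FOLLOW, NOFOLLOW, NOARCHIVE); this is a hedged sketch of typical usage, not necessarily the module's exhaustive list. It parses a page that forbids indexing and link-following:

    use strict;
    use warnings;
    use HTML::RobotsMETA;

    # NOINDEX forbids indexing this page; NOFOLLOW forbids following its links.
    my $html = <<'HTML';
    <html><head>
    <meta name="robots" content="NOINDEX,NOFOLLOW">
    </head><body>...</body></html>
    HTML

    my $rules = HTML::RobotsMETA->new->parse_rules($html);
    print $rules->can_follow ? "may follow links\n" : "must not follow links\n";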
new()

Creates a new HTML::RobotsMETA parser. Takes no arguments.
parse_rules($html)

Parses an HTML string for META tags and returns an HTML::RobotsMETA::Rules object, which you can use in conditionals later.
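A typical crawler loop fetches a page first and then consults the rules before extracting links. The sketch below assumes LWP::UserAgent as the HTTP client; any client that returns the page body works the same way:

    use strict;
    use warnings;
    use HTML::RobotsMETA;
    use LWP::UserAgent;    # assumed available; any HTTP client will do

    my $ua  = LWP::UserAgent->new;
    my $res = $ua->get('http://example.com/');

    if ($res->is_success) {
        my $rules = HTML::RobotsMETA->new->parse_rules($res->decoded_content);
        # Only extract links if the page's META tags permit it
        if ($rules->can_follow) {
            # ... extract and queue links here
        }
    }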
Returns the HTML::Parser instance to use.
Returns the callback specs to be used in the HTML::Parser constructor.
META tags that specify a particular crawler name (e.g. <META NAME="Googlebot">) are not handled yet.
There also might be more obscure directives that I'm not aware of.
Copyright (c) 2007 Daisuke Maki <daisuke@endeworks.jp>
HTML::RobotsMETA::Rules, HTML::Parser
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See http://www.perl.com/perl/misc/Artistic.html
To install HTML::RobotsMETA, copy and paste the appropriate command into your terminal.
cpanm

    cpanm HTML::RobotsMETA

CPAN shell

    perl -MCPAN -e shell
    install HTML::RobotsMETA
For more information on module installation, please visit the detailed CPAN module installation guide.