HTTP::GetImages - Spider to recover and store images from web pages.
  use HTTP::GetImages;
  $_ = new HTTP::GetImages (
      dir  => '.',
      todo => ['http://www.google.com/',],
      dont => ['http://www.somewhere/ignorethis.html','http://and.this.html'],
      chat => 1,
  );
  $_->print_imgs;
  $_->print_done;
  $_->print_failed;
  $_->print_ignored;
  my $hash = $_->imgs_as_hash;
  foreach (keys %{$hash}){
      warn "$_ = ",$hash->{$_},"\n";
  }
  exit;
This module allows you to automate the searching, recovery and local storage of images from the web, including those linked by anchor (A), image (IMG) and image map (AREA) elements.
Supply a URI or list of URIs to process, and HTTP::GetImages will recurse over every link it finds, searching for images.
By supplying a list of URIs, you can restrict the search to certain webservers and directories, or exclude it from certain webservers and directories.
You can also decide to reject images that are too small or too large.
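For example, a crawl confined to one site and filtered by file size might look like the following sketch. The URLs are placeholders, and the size options are assumed here to be named min_size and max_size (max_size is the name given in the version history; min_size is an assumption by analogy):

  use strict;
  use warnings;
  use HTTP::GetImages;

  # Placeholder URLs; dir/todo/dont/chat are the constructor
  # options shown in the SYNOPSIS.
  my $spider = new HTTP::GetImages (
      dir      => './images',
      todo     => ['http://example.com/gallery/'],
      dont     => ['http://example.com/private/'],
      min_size => 2_000,    # assumed option name: skip tiny spacer images
      max_size => 500_000,  # skip very large files
      chat     => 1,        # real-time report to STDERR
  );
  $spider->print_imgs;
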
LWP::UserAgent; HTTP::Request; HTML::TokeParser;
Set to a value above zero if you'd like a real-time progress report sent to STDERR. Defaults to off.
Besides the class reference, accepts name=>value pairs:
The maximum number of attempts the agent should make to access the site. Default is three.
The path to the directory in which to store images (no trailing oblique necessary).
Default value is 0, which allows images to be saved with their original names. If set to 1, images will be given new names based on the time they were saved at. If set to 2, images will be given filenames according to their source location.
One or more URLs to process; can be an anonymous array, array reference, or scalar.
As todo, above, but listing URLs that should be ignored.
If one of these is ALL, the module will ignore all HTML documents that do not exactly match those in the todo array of URLs to process. If one of these is NONE, no documents will be ignored.
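A sketch of the ALL case, using the constructor options shown in the SYNOPSIS (the URL is a placeholder): only the page listed in todo is parsed for images, and no further HTML links are followed.

  use strict;
  use warnings;
  use HTTP::GetImages;

  my $spider = new HTTP::GetImages (
      dir  => '.',
      todo => ['http://example.com/page.html'],
      dont => ['ALL'],   # do not recurse beyond the todo list
  );
  $spider->print_imgs;
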
A regular expression 'or' list of image extensions to match.
Will be applied at the end of a filename, after a point, and is insensitive to case.
Defaults to (jpg|jpeg|bmp|gif|png|xbm|xmp).
As ext_ok (above), but the default value is: (wmv|avi|rm|mpg|asf|ram|asx|mpeg|mp3)
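The matching behaviour described above can be illustrated in plain Perl, without the module. The pattern below is the documented default ext_ok list, anchored after a point at the end of the filename and applied without regard to case (the module's internal matching may differ in detail; this is a sketch of the documented behaviour):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Documented default ext_ok list, case-insensitive,
  # anchored after a point at the end of the filename.
  my $ext_ok = qr/\.(jpg|jpeg|bmp|gif|png|xbm|xmp)$/i;

  for my $file ('photo.JPG', 'map.area.gif', 'clip.mpeg', 'page.html') {
      print "$file: ", ($file =~ $ext_ok ? "image" : "not an image"), "\n";
  }

Note that clip.mpeg does not match: although "mpeg" contains "peg", the alternation must immediately follow a point and run to the end of the name.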
The minimum path a URL must contain. This can be a scalar or an array reference.
The minimum size an image can be if it is to be saved.
The maximum size an image can be if it is to be saved.
The object has several private variables which you can read for the results when the job is done; however, the public methods below are the preferred way to access them.
A hash, keys of which are the original URLs of the images, values being the local filenames.
A hash, keys of which are the failed URLs, values being short reasons.
Print a list of the images saved.
Returns a reference to a hash of images saved, where keys are new image locations, values are original locations.
Print a list of the URLs accessed and return a reference to a hash of the same.
Print a list of the URLs that failed, with reasons, and return a reference to a hash of the same.
Print a list of the URLs ignored and return a reference to a hash of the same.
Everything and everyone listed above under DEPENDENCIES.
Version 0.34*, updates by Lee Goddard:
Re-implemented the dont => ['ALL'] feature that got lost during the redesign of the API; the agent now makes multiple attempts to get an image.
Version 0.32, updates by Lee Goddard: fixed bugs.
Version 0.31, updates by Lee Goddard: added 'max_size'.
Version 0.3, updates by Lee Goddard:
Made the API nicer, tidied up some coding, and added a couple of methods. Started to add tests.
Version 0.25, updates by Duncan Lamb and Lee Goddard:
The character ~ in the URL would confuse the abs_url subroutine, resolving http://www.o.com/~home/page.html to http://www.o.com. It doesn't any more.
Double obliques in a link would cause an endless loop - no longer.
A link referencing its own directory with ./ would also cause an endless loop - but no more.
EXTENSIONS_BAD list added.
NEWNAMES updated.
Frame parsing.
Multiple minimum-paths for URLs added.
GetImages.pm is proud to be part of Duncan Lamb's HTTP::StegTest:
An example report can be found at http://64.192.146.9/ in which the library was run against several anti-American and "pro-Taliban" sites. The reports display images that changed between collections, images that tested positive for being altered by an outside program, and images which were "false positives." Over 25,000 images were tested across 10 sites.
Lee Goddard (LGoddard@CPAN.org) 05/05/2001 16:08 ff.
With updates and fixes from Duncan Lamb (duncan_lamb@hotmail.com), 12/2001.
Copyright 2000-2001 Lee Goddard.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
To install HTTP::GetImages, copy and paste the appropriate command into your terminal.

With cpanm:

  cpanm HTTP::GetImages

With the CPAN shell:

  perl -MCPAN -e shell
  install HTTP::GetImages

For more information on module installation, please visit the detailed CPAN module installation guide.