WWW::Webrobot::pod::Recur - Interface for traversing a site / generating multiple urls
Usage in a test plan:
    <request>
        <method value='GET'/>
        <url value='http://server.org/start.html'/>
        <recurse>
            <WWW.Webrobot.Recur.LinkChecker>
                <and>
                    <url value="http://server.org/"/>
                    <scheme value="http"/>
                    <not><url value="logout\.jsp"/></not>
                    <not><url value="logout\.do"/></not>
                    <not><url value="setUserLocale.do"/></not>
                </and>
            </WWW.Webrobot.Recur.LinkChecker>
        </recurse>
        <description value='Check site'/>
    </request>
This interface allows you to visit new urls starting from a url given in a test plan.
If you want to write a recursion class, you must implement the following methods:
($newurl, $caller_pages) = $recurse -> next($r);
$newurl is the next url to visit, and
$caller_pages is a list of values indicating the pages on which
$newurl has been found.
If you want to stop traversing, return undef for $newurl.
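For illustration, here is a minimal sketch of a next implementation. The class name My::Recur::QueueWalker is hypothetical (not part of the WWW::Webrobot distribution), $r is assumed to be the response of the last request, and $caller_pages is assumed to be an array reference:

    package My::Recur::QueueWalker;
    use strict;
    use warnings;

    sub new {
        my ($class, $start_url) = @_;
        # the queue holds pairs of [url to visit, page it was found on]
        return bless {
            seen  => {},
            queue => [ [ $start_url, undef ] ],
        }, $class;
    }

    sub next {
        my ($self, $r) = @_;
        # a real implementation would extract links from $r and push
        # them onto the queue here
        while (my $entry = shift @{ $self->{queue} }) {
            my ($url, $found_on) = @$entry;
            next if $self->{seen}{$url}++;   # visit every url only once
            return ($url, [$found_on]);      # next url and its caller pages
        }
        return (undef, undef);               # queue empty: stop traversing
    }

    1;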
The allowed method takes a string url as argument and must return 1 if this url is allowed and 0 if it is not.
If you wonder why this method is needed, here is an explanation: returning only allowed urls from
next is not sufficient because urls may be redirected via 3xx codes. This redirection is done automatically by WWW::Webrobot via LWP::UserAgent, so redirect targets must be safely excluded there as well. That's where the
allowed method is used.
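As a sketch, an allowed method matching the filter from the synopsis could look like this (added to the same hypothetical class as above; the regular expressions mirror the <and>/<not> conditions of the test plan):

    # stay on http://server.org/ and never follow logout/locale urls
    sub allowed {
        my ($self, $url) = @_;
        return 0 unless $url =~ m{^http://server\.org/};  # site and scheme check
        return 0 if $url =~ /logout\.jsp/;
        return 0 if $url =~ /logout\.do/;
        return 0 if $url =~ /setUserLocale\.do/;
        return 1;
    }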
Existing implementations of this interface include:

- Follow all frames and images.
- Follow all frames, images and links you can get (WWW::Webrobot::Recur::LinkChecker, as used in the synopsis above).
- Follow links randomly.
Copyright (c) 2004 ABAS Software AG
This software is licensed under the Perl license; see the LICENSE file.