NAME

HTML::TagFilter - An HTML::Parser-based selective tag remover

SYNOPSIS

    use HTML::TagFilter;
    my $tf = HTML::TagFilter->new;
    my $clean_html = $tf->filter($dirty_html);
    
    # or
    
    my $tf = HTML::TagFilter->new(
        allow=>{...}, 
        deny=>{...}, 
        log_rejects => 1, 
        strip_comments => 1, 
        echo => 1,
        skip_xss_protection => 1,
    );
    
    $tf->parse($some_html);
    $tf->parse($more_html);
    my $clean_html = $tf->output;
    my $cleaning_summary = $tf->report;
    my @tags_removed = $tf->report;
    my $error_log = $tf->error;

DESCRIPTION

HTML::TagFilter is a subclass of HTML::Parser with a single purpose: it will remove unwanted html tags and attributes from a piece of text. It can act in a more or less fine-grained way - you can specify permitted tags, permitted attributes of each tag, and permitted values for each attribute in as much detail as you like.

Tags which are not allowed are removed. Tags which are allowed are trimmed down to only the attributes which are allowed for each tag. It is possible to allow all or no attributes from a tag, or to allow all or no values for an attribute, and so on.

(As of version 0.73 the filter will also guard against some common cross-site scripting attacks, unless you tell it not to).

The original purpose for this was to screen user input. In that setting you'll often find that just using:

    my $tf = HTML::TagFilter->new;
    put_in_database($tf->filter($my_text));

will do. However, it can also be used for display processes (eg text-only translation) or cleanup (eg removal of old javascript). In those cases you'll probably want to override the default rule set with a small number of denial rules.

    my $filter = HTML::TagFilter->new(deny => { img => { all => [] } });
    print $filter->filter($my_text);

Will strip out all images, for example, but leave everything else untouched.

nb (faq #1): the filter only removes the tags themselves. All it does to text that is not part of a tag is escape the <s and >s, to guard against false negatives and some common cross-site attacks.

CONFIGURATION: RULES

Creating the rule set is fairly simple. You have three options:

use the defaults

which will produce safe but still formatted html, without images, tables, javascript or much else apart from inline text formatting and links.

selectively override the defaults

use the allow_tags and deny_tags methods to pass in one or more tag settings. eg:

    $filter->allow_tags({ p => { class=> ['lurid','sombre','plain']} });

will mean that all attributes other than class="lurid|sombre|plain" will be removed from <p> tags. See below for more about specifying rules.

supply your own configuration

To override the defaults completely, supply the constructor with some rules:

    my $filter = HTML::TagFilter->new( allow=>{ p => { class=> ['lurid','sombre','plain']} });

Only the rules you supply in this form will be applied. You can achieve the same thing after construction by first clearing the rule set:

    my $filter = HTML::TagFilter->new();
    $filter->allow_tags();
    $filter->allow_tags({ p => { align=> ['left','right','center']} });

Future versions are intended to offer a more sophisticated rule system, allowing you to specify combinations of attributes, ranges for values and generally match names in a more fuzzy way.

I'm also considering adding a set of standard filters for, eg, image or javascript removal. I'd be glad to hear suggestions.

The simple hash interface will continue to work for the foreseeable future, though.

CONFIGURATION: BEHAVIOURS

There are currently four switches that will change the behaviour of the filter. They're supplied at construction time alongside any rules you care to specify. All of them default to 'off'.

    my $tf = HTML::TagFilter->new(
        log_rejects => 1,
        strip_comments => 1,
        echo => 1,
        skip_xss_protection => 1,
    );
    
log_rejects

Set log_rejects to something true and the filter will keep a detailed log of all the tags it removes. The log can be retrieved by calling report(), which will return a summary in scalar context and a detailed AoH (array of hashes) in list context.

echo

Set echo to 1, or anything true, and the output of the filter will be sent straight to STDOUT. Otherwise the filter is silent until you call output().

strip_comments

Set strip_comments to 1 and comments will be stripped. If you don't, they won't.

skip_xss_protection

Unless you set skip_xss_protection to 1, the filter will postprocess some of its output to protect against common cross-site scripting attacks.

It will entify any < and > in non-tag text, entify quotes in attribute values (the Parser will have unencoded them) and strip out values for vulnerable attributes if they don't look suitably like urls. By default these attributes are checked: src, lowsrc, href, background and cite. You can replace that list (not extend it) at any time:

    $filter->xss_risky_attributes( qw(your list of attributes) );
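The url test itself isn't part of the public interface, but its effect can be pictured with a plain-Perl sketch. The `looks_like_url` helper and its regexes below are hypothetical stand-ins, not the module's internals:

```perl
use strict;
use warnings;

# Hypothetical stand-in for the internal url check: accept values that
# look like ordinary urls or relative paths, reject scheme tricks such
# as javascript: pseudo-urls.
sub looks_like_url {
    my $value = shift;
    return 0 if $value =~ /^\s*javascript:/i;      # classic XSS vector
    return 1 if $value =~ m{^(?:https?|ftp)://}i;  # absolute url
    return 1 if $value =~ m{^mailto:}i;
    return 1 if $value =~ m{^[\w./#-]+$};          # relative path
    return 0;
}

print looks_like_url('http://example.com/pic.png') ? "kept\n" : "stripped\n";  # prints "kept"
print looks_like_url('javascript:alert(1)')        ? "kept\n" : "stripped\n";  # prints "stripped"
```

An attribute value that fails a test like this is simply dropped; the tag itself survives.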

RULES

Each element is tested as it is encountered, in two stages:

tag filter

Just checks that this tag is permitted, and blocks the whole thing if not. Applied to both opening and closing tags.

attribute filter

Any tag that passes the tag filter will remain in the text, but the attribute filter will strip out of it any attributes that are not permitted, or which have values that are not permitted for that tag/attribute combination.

format for rules

There are two kinds of rule: permissions and denials. They work as you'd expect, and can coexist, but they're not quite symmetrical. Denial rules are intended to complement permission rules, so that they can provide a kind of compound 'unless'.

* If there are any 'permission' rules, then everything that doesn't satisfy any of them is eliminated.

* If there are any 'deny' rules, then anything that satisfies any of them is eliminated.

* If there are both denial and permission rules, then everything that either satisfies a denial rule or fails to satisfy any of the permission rules is eliminated.

* If there is neither kind, we strip out everything just to be on the safe side.
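The four cases above reduce to a short boolean test. As a sketch only (the `element_passes` helper and its `matches` stub are hypothetical, not the module's code; `matches` is simplified to a bare tag lookup):

```perl
use strict;
use warnings;

# Stub: does this tag satisfy any rule in the given rule set?
# (The real filter also consults attributes and values.)
sub matches {
    my ($rules, $tag) = @_;
    return exists $rules->{$tag};
}

# The decision logic described in the four bullet points above.
sub element_passes {
    my ($allow, $deny, $tag) = @_;
    return 0 if %$deny  && matches($deny, $tag);    # satisfies a denial rule
    return 0 if %$allow && !matches($allow, $tag);  # fails every permission rule
    return 0 if !%$allow && !%$deny;                # no rules at all: strip everything
    return 1;
}

# No permission rules, one denial rule: only the denied tag goes.
my %deny = (img => {});
print element_passes({}, \%deny, 'img') ? "kept\n" : "stripped\n";  # prints "stripped"
print element_passes({}, \%deny, 'p')   ? "kept\n" : "stripped\n";  # prints "kept"
```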

The two most likely setups are

1. a full set of permission rules and maybe a couple of denial rules to eliminate pet hates.

2. no permission rules at all and a small set of denial rules to remove particular tags.

Rules are passed in as a HoHoL:

    { tag name->{attribute name}->[valuelist] }

There are three reserved words: 'any' and 'none' stand respectively for 'anything is permitted' and 'nothing is permitted', or if in denial: 'anything is removed' and 'nothing is removed'. 'all' is only used in denial rules and it indicates that the whole tag should be stripped out: see below for an explanation and some mumbled excuses.

For example:

    $filter->allow_tags({ p => { any => [] } });

Will permit <p> tags with any attributes. For clarity's sake it may be shortened to:

    $filter->allow_tags({ p => { 'any' } });

but note that you'll get a warning about the odd number of hash elements if -w is on, and in the absence of the => the quotes are required. And

    $filter->allow_tags({ p => { none => [] } });

Will allow <p> tags to remain in the text, but all attributes will be removed. The same rules apply at all levels in the tag/attribute/value hierarchy, so you can say things like:

    $filter->allow_tags({ any => { align => [qw(left center right)] } });
    $filter->allow_tags({ p => { align => ['any'] } });

examples

To indicate that a link destination is ok and you don't mind what value it takes:

    $filter->allow_tags({ a => { 'href' } });

To limit the values an attribute can take:

    $filter->allow_tags({ a => { class => [qw(big small middling)] } });

To clear all permissions:

    $filter->allow_tags({});

To remove all onClicks from links but allow all targets:

    $filter->allow_tags({ a => { onClick => ['none'], target => [], } });

You can combine allows and denies to create 'unless' rules:

    $filter->allow_tags({ a => { any => [] } });
    $filter->deny_tags({ a => { onClick => [] } });

Will remove only the onClick attribute of a link, allowing everything else through. If this was your only purpose, you could achieve the same thing just with the denial rule and an empty permission set, but if there's other stuff going on then you probably need this combination.

order of application

Denial rules are applied first: we take out whatever you specify in deny, then take out whatever you don't specify in allow, unless the allow set is empty, in which case we ignore it. If both sets are empty, no tags get through.

(We prefer to err on the side of less markup, but I expect this will be configurable soon.)

oddities

Only one deliberate one, so far. The main asymmetry between permission and denial rules is that from

    $filter->allow_tags({ p => {...} })

it follows that p tags are permitted, but the reverse is not true:

    $filter->deny_tags({ p => {...} })

doesn't imply that p tags are removed, just that the relevant attributes are removed from them. If you want to use a denial rule to eliminate a whole tag, you have to say so explicitly:

    $filter->deny_tags({ p => {'all'} })

will remove every <p> tag, whereas

    $filter->deny_tags({ p => {'any'} })

will just remove all the attributes from <p> tags. Not very pretty, I know. It's likely to change, but probably not until after we've invented a system for supplying rules in a more readable format.

METHODS

HTML::TagFilter->new();

If called without parameters, loads the default set. Otherwise loads the rules you supply. For the rule format, see above.

$tf->filter($html);

Exactly equivalent to:

    $tf->parse($html);
    $tf->output();

but more useful, because it'll fit in a one-liner. eg:

    print $tf->filter( $pages{$_} ) for keys %pages;
    

Note that calling filter() will clear anything that was waiting in the output buffer, and will clear the buffer again when it's finished. It's meant to be a one-shot operation and doesn't co-operate well. Use parse() and output() if you want to daisychain.
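The parse()/output() semantics can be pictured with a toy buffer class (purely illustrative, not the module's implementation): parse() accumulates, output() returns the buffer and clears it, so several chunks can be daisychained into one result.

```perl
use strict;
use warnings;

# Toy illustration of the parse()/output() buffer semantics.
package ToyBuffer;
sub new    { bless { buffer => '' }, shift }
sub parse  { my ($self, $text) = @_; $self->{buffer} .= $text; return $self }
sub output { my $self = shift; my $out = $self->{buffer}; $self->{buffer} = ''; return $out }

package main;
my $tb = ToyBuffer->new;
$tb->parse('first chunk ');
$tb->parse('second chunk');
print $tb->output, "\n";   # both chunks together
print $tb->output, "\n";   # buffer now empty: prints a bare newline
```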

$filter->parse($text);

The parse method is inherited from HTML::Parser, but most of its normal behaviours are subclassed here and the output they normally print is kept for later. The other configuration options that HTML::Parser normally offers are not passed on, at the moment, nor can you override the handler definitions in this module.

$filter->output()

Calls $filter->eof, then returns and clears the output buffer. This concludes the processing of your text, but you can of course pass a new piece of text to the same parser object and begin again.

$filter->report()

If called in list context, returns the array of rejected tag/attribute/value combinations; in scalar context, returns a more or less readable summary. Returns () if logging is not enabled. Clears the log.

$filter->allow_tags($hashref)

Takes a hashref of permissions and adds them to what we already have, replacing at the tag level where rules are already defined. In other words, you can add a tag to the existing set, but to add an attribute to an existing tag you have to specify the whole set of attribute permissions. If no rules are sent, this clears the permission rule set.
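The tag-level replacement can be pictured as a plain hash merge keyed on tag name (an illustrative sketch of the semantics described above, not the module's code):

```perl
use strict;
use warnings;

# Existing permission rules: <a> may carry href and class.
my %rules = (
    a => { href => ['any'], class => ['external'] },
);

# New rules passed to allow_tags: replacement happens per tag,
# so the whole 'a' entry is overwritten, not merged attribute by attribute.
my %additions = (
    a => { href => ['any'] },       # replaces the entire 'a' entry
    p => { align => ['left'] },     # adds a new tag
);

$rules{$_} = $additions{$_} for keys %additions;

# 'a' now permits only href: the class permission was replaced away.
print exists $rules{a}{class} ? "class kept\n" : "class gone\n";  # prints "class gone"
print exists $rules{p}        ? "p added\n"    : "no p\n";        # prints "p added"
```

So to keep an existing attribute permission while adding another to the same tag, resend the tag's full attribute set.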

$filter->deny_tags($hashref)

likewise but sets up (or clears) denial rules.

$filter->allows()

Returns the full set of permissions as a HoHoL. Can't be set this way: just a utility function in case you want either to display the rule set, or to send it back to allow_tags in a modified form.

$filter->denies()

Likewise for denial rules.

$filter->error()

Returns an error report of currently dubious usefulness.

TO DO

More sanity checks on incoming rules

Simpler rule-definition interface

Complex rules. The long-term goal is that someone can supply a rule like 'remove all images where height or width is missing' or 'change all font tags where size="2" to <span class="small">'. Which will be hard. For a start, HTML::Parser doesn't see paired start and close tags, which would be required for conditional actions.

An option to speed up operations by working only at the tag level and using HTML::Parser's built-in screens.

REQUIRES

HTML::Parser

SEE ALSO

HTML::Parser

AUTHOR

William Ross, wross@cpan.org

COPYRIGHT

Copyright 2001-3 William Ross

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

Please use https://rt.cpan.org/ to report bugs & omissions, describe cross-site attacks that get through, or suggest improvements.