NAME
    File::CacheDir - Perl module to aid in keeping track of and cleaning
    up files, quickly and without a cron

    $Id: CacheDir.pm,v 1.11 2003/09/09 21:23:52 earl Exp $

DESCRIPTION
    CacheDir attempts to keep files around for some ttl (time to live),
    while quickly, automatically and cronlessly cleaning up files that
    are too old.

ARGUMENTS
    The possible named arguments (which can be passed individually or in
    a hash ref; see below for an example) are:

    base_dir - the base directory. Default is '/tmp/cache_dir'.

    carry_forward - whether or not to move the file forward when a time
    period boundary gets crossed. For example, if your ttl is 3600 and
    you move from the 278711 to the 278712 hour, then if carry_forward
    is set, CacheDir will refresh the cookie (if set_cookie is true),
    move the file to the new location, and set
    $self->{carried_forward} = 1. Default is 1.

    cleanup_suffix - in order to avoid having more than one process
    attempt cleanup, a touch file that looks like
    "$cleanup_dir$self->{cleanup_suffix}" is created and cleaned up.

    cleanup_fork - whether to fork on cleanup. Default is 1.

    cleanup_frequency - percentage of the time to attempt cleanup.

    cleanup_length - seconds to allow for cleanup, that is, how old a
    touch file can be before a new cleanup process will start.

    content_typed - whether or not you have already printed a
    Content-type header. Default is 0.

    cookie_brick_over - whether to brick over (overwrite) an old cookie.
    Default is 0.

    cookie_name - the name of your cookie. Default is 'cache_dir'.

    cookie_path - the path for your cookie. Default is '/'.

    filename - what you want the file to be named (not including the
    directory), like "storebuilder" . time . $$. I would suggest using a
    script-specific word (like the name of the cgi), time and $$ (which
    is the pid number) in the filename, just so files are easy to track
    and the filenames are reasonably unique. Default is time . $$.

    periods_to_keep - how many old periods you would like to keep.

    set_cookie - whether or not to set a cookie. Default is 0.

    ttl - how long you want the file to stick around. Can be given in
    seconds (3600) or as "1 hour", "1 day" or even "1 week". Default is
    '1 day'.
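
    For instance, a call exercising several of these arguments might
    look like the following (a sketch; the values here are made up for
    illustration):

      my $cache_dir = File::CacheDir->new({
        base_dir          => '/tmp/my_app',
        ttl               => '2 hours',
        filename          => 'my_app.' . time . ".$$",
        periods_to_keep   => 2,
        carry_forward     => 1,
        set_cookie        => 0,
        cleanup_frequency => 10,
      });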

COOKIES
    Since CacheDir fits in so nicely with cookies, I use a few CGI methods
    to automatically set cookies, retrieve the cookies, and use the cookies
    when applicable. The cookie methods make it near trivial to handle
    session information. Taking the advice of Rob Brown
    <rbrown@about-inc.com>, I use CGI.pm, though it increases load time
    and nearly doubles the out-of-the-box memory required.

    The cookie that gets set is the full path of the file with your
    base_dir swapped out. This keeps users from seeing the full path to
    your files. The filename that gets returned from a cache_dir call,
    however, is the full path.
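
    As an illustration of that swap (a sketch; the paths here are made
    up), the cookie value is just the returned filename with base_dir
    stripped off the front:

      my $base_dir = '/tmp/example';
      my $filename = '/tmp/example/7200/137738/example.1063142632.4242';

      # the cookie value is the full path with $base_dir swapped out
      (my $cookie_value = $filename) =~ s/^\Q$base_dir\E//;
      # $cookie_value is now "/7200/137738/example.1063142632.4242"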

METHOD OVERRIDES
    Most of the time, the defaults will suffice, but using your own object
    methods, you can override most everything CacheDir does. To show which
    methods are used, I walk through the code with a simple example.

      my $cache_dir = File::CacheDir->new({
        base_dir => '/tmp/example',
        ttl      => '2 hours',
        filename => 'example.' . time . ".$$",
      });

    An object gets created, with the hash passed getting blessed in.

      my $filename = $cache_dir->cache_dir;

    The ttl gets converted to seconds, here 7200. Then the ttl directory
    is formed:

      $ttl_dir = $base_dir . $ttl;

    In our example, $ttl_dir = "/tmp/example/7200";

    $self->ttl_mkpath - if the ttl directory does not exist, it gets made

    Next comes the number of ttl units since epoch, here something like
    137738. This is

      $self->{int_time} = int(time/$self->{ttl});

    Now, the full directory can be formed:

      $self->{full_dir} = $ttl_dir . $self->{int_time};

    If $self->{full_dir} exists, $self->{full_dir} . $self->{filename}
    gets returned. Otherwise, I look through $ttl_dir, and for each
    directory that is too old (more than $self->{periods_to_keep}
    periods back) I run

    $self->cleanup - just deletes the old directory, but this is where a
    backup could take place, or whatever you like.

    Finally, I

    $self->sub_mkdir - makes the new directory, $self->{full_dir}

    and return $filename.
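
    To override a step, you can subclass File::CacheDir and replace the
    method in question. Here is a hypothetical sketch of overriding
    cleanup to archive each too-old directory before it is deleted (the
    method name comes from the walkthrough above, but the signature,
    with the old directory as the argument, is an assumption on my
    part):

      package My::CacheDir;
      use base 'File::CacheDir';

      sub cleanup {
        my ($self, $old_dir) = @_;  # assumed: the too-old directory

        # back up the directory before letting CacheDir delete it
        system('tar', 'czf', "$old_dir.tar.gz", $old_dir);

        $self->SUPER::cleanup($old_dir);
      }

      package main;

      my $cache_dir = My::CacheDir->new({ base_dir => '/tmp/example' });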

SYNOPSIS
      #!/usr/bin/perl -w

      use strict;
      use File::CacheDir qw(cache_dir);

      my $filename = cache_dir({
        base_dir => '/tmp',
        ttl      => '2 hours',
        filename => 'example.' . time . ".$$",
      });

      `touch $filename`;
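
    Rather than shelling out to touch, you could just as well write to
    the returned path directly with core Perl, for example:

      open my $fh, '>>', $filename or die "can't open $filename: $!";
      print $fh "session data here\n";
      close $fh;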

THANKS
    Thanks to Rob Brown for discussing general concepts, helping me think
    through things, offering suggestions and doing the most recent code
    review. The idea for carry_forward was pretty well all Rob. I didn't see
    a need, but Rob convinced me of one. Since Rob first introduced the idea
    to me, I have seen CacheDir break three different programmers' code.
    With carry_forward, no problems. Finally, Rob changed my non-CGI cookie
    stuff to use CGI, thus avoiding many a flame war. Rob also recently
    wrote a taint clean version of rmtree. He also wrote an original version
    of strong_fork, recently adopted here, and got my logic right on
    fork'ing and exit'ing.

    Thanks to Paul Seamons for listening to my ideas, offering
    suggestions, using CacheDir and giving feedback. The namespace
    File::CacheDir, and the casing of CacheDir and cache_dir, all came
    from Paul. Paul helped me cut down strong_fork to what we actually
    need here. Finally, thanks to Paul for the idea of this THANKS
    section.

    Thanks to Wes Cerny for using CacheDir, and giving feedback. Also,
    thanks to Wes for a last minute code review. Wes had me change the
    existence check on $self->{carry_forward_filename} to a plain file check
    based on his experience with CacheDir.

    Thanks to Allen Bettilyon for discovering some problems with the cleanup
    scheme. Allen had the ideas of using touch files and cleanup_frequency
    to avoid concurrent cleanups. He also convinced me to use
    perhaps_cleanup to allow for backward compatibility with stuff that
    might be using cleanup.