Tie::Hash::Cache::MRU - a simple MRU cache with a TIEHASH interface
    use Tie::Hash::Cache::MRU;

    # when the expensive function is in another tied hash:
    tie my %cache1, 'Tie::Hash::Cache::MRU',
        HASH  => \%AnotherTiedHash,
        SIZE  => 100,
        CLEAR => 0;

    # or use it somewhat like Memoize:
    tie my %cache2, 'Tie::Hash::Cache::MRU',
        SIZE  => 100,
        FETCH => \&SomethingTimeConsuming;
Creates a tied hash interface that memoizes only a limited number of entries.
Expiry is obtained by keeping two cache hashes and throwing out the old one when the new one has more than SIZE entries filled. This is crude, but it effectively avoids all the bookkeeping that fancier expiration mechanisms need to do.
The following named parameters are recognized at tie time:
Up to twice SIZE entries are kept in the cache. Stored cache size will average 1.5 * SIZE, since the old cache is thrown away all at once when we've got SIZE elements in the new cache.
LIFE is the maximum number of seconds that an element can be cached before we look it up again. It defaults to undef, which means infinite life. When LIFE is defined, additional storage is used to track the age of the cached elements.
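For example, a cache whose entries go stale after five minutes might be tied like this (the look-up function name is illustrative, not part of the module):

```perl
use Tie::Hash::Cache::MRU;

# Keep between SIZE and 2*SIZE entries; re-fetch anything older than
# 300 seconds.
tie my %cache, 'Tie::Hash::Cache::MRU',
    SIZE  => 100,
    LIFE  => 300,                   # seconds before an entry is re-fetched
    FETCH => \&ExpensiveLookup;     # hypothetical expensive function

my $value = $cache{some_key};       # looked up at most once per LIFE window
```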
When a HASH parameter is provided, the given hashref will be consulted for operations not specifically overridden.
When coderefs are provided for the tiehash operation parameters (FETCH, STORE, and so on), they will be used for look-ups, write-throughs, and the like. The object parameter will not be provided to them, so methods taken directly from tiehash code will not work.
CLEAR can be set to something that is not a coderef to prevent a valuable cached database from being accidentally clobbered with

    %CachedData = ();
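A sketch of that protection, assuming an illustrative backing tied hash named %PreciousDatabase:

```perl
use Tie::Hash::Cache::MRU;

# With CLEAR set to a non-coderef value, clearing the tied cache is not
# passed through to the backing hash.
tie my %cache, 'Tie::Hash::Cache::MRU',
    HASH  => \%PreciousDatabase,   # illustrative backing tied hash
    SIZE  => 100,
    CLEAR => 0;

%cache = ();   # does not clobber %PreciousDatabase
```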
If you need more direct access to the internals of the cache, the cache object is an arrayref, and future versions of this module, if any, will not re-order the indices of its contents.
An UNCACHE method is provided by the Tie::Hash::Cache::MRU object; it removes the keys given as its arguments from the cache.
tied(%cache3) -> UNCACHE(@KeysThatHaveChanged);
An UPDATE method is provided that takes a hash as its argument list and replaces the values of any keys that are already in the cache with the provided values. Keys that aren't in the cache already are ignored.
tied(%cache3) -> UPDATE(%DataUpdate);
A CACHE method is provided that takes a hash as its argument list and stores the provided data in the cache, without writing through to the STORE function or the provided HASH.
tied(%cache3) -> CACHE(%NewData);
If %NewData has more than SIZE keys, a cache rotation will get triggered soon, but all of the provided data will be available in the cache.
Instead of using a less efficient data structure, or going through all kinds of contortions to make sure that the N+1st element is deleted, we simply maintain two caches, current and old. When an element is not in current, we look in old before fetching from the source. When there are more than SIZE keys in current, we rename current to old (throwing away the old old in the process) and start with an empty current.
This gives us memory usage that varies between SIZE and 2*SIZE cached elements, and very simple bookkeeping.
When LIFE has been defined, before rotating the caches we delete entries from the LIFE hash for keys that exist in old but not in current.
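The two-hash rotation described above can be sketched in a few lines of standalone Perl. This is not the module's own code, just an illustration of the technique; whether a hit in the old cache is promoted into the current one is an assumption here:

```perl
use strict;
use warnings;

# Standalone sketch of two-hash MRU rotation: look in %current, fall back
# to %old, and when %current reaches $SIZE keys, discard %old wholesale
# and let %current take its place.
my $SIZE = 3;
my (%current, %old);

sub cached_fetch {
    my ($key, $lookup) = @_;        # $lookup is the expensive function
    return $current{$key} if exists $current{$key};
    if (exists $old{$key}) {        # promote a hit in the old cache
        return $current{$key} = delete $old{$key};
    }
    my $value = $lookup->($key);
    if (keys(%current) >= $SIZE) {  # rotate: current becomes old
        %old     = %current;
        %current = ();
    }
    return $current{$key} = $value;
}
```

Memory usage stays between $SIZE and 2 * $SIZE entries, and no per-entry access bookkeeping is needed.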
Using each to iterate over a cached database goes directly to the data, without affecting the contents of the cache.
david l nicol, <email@example.com>
Copyright (C) 2004 by david l nicol
This library is free software; you can redistribute it and/or modify it under the GPL or the AL.