Initial release: April 2008
Kris
0.99.7
- class CacheMap(K, V, alias Hash = Container.hash, alias Reap = Container.reap, alias Heap = Container.Collect) ¶
-
CacheMap extends the basic hashmap type by adding a limit to
the number of items contained at any given time. In addition,
CacheMap orders the cache entries such that the most recently
accessed are at the head of the queue and the least recently
accessed are at the tail. When the queue becomes full, old
entries are dropped from the tail and their slots are reused
to house new cache entries.
In other words, it retains the most-recently-used (MRU) items
while dropping the least-recently-used (LRU) ones once
capacity is reached.
This is great for keeping commonly accessed items around while
limiting the amount of memory used. Typically, the capacity
would be set in the thousands (via the constructor).
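A minimal usage sketch, assuming a (char[], int) instantiation with the default Hash, Reap and Heap aliases; the module path in the import is an assumption and may differ in your Tango revision:

```d
import tango.util.container.more.CacheMap;  // module path assumed

void main ()
{
        // cache at most 3 entries; the LRU entry is evicted on overflow
        auto cache = new CacheMap!(char[], int)(3);

        cache.add ("a", 1);
        cache.add ("b", 2);
        cache.add ("c", 3);
        cache.add ("d", 4);   // capacity exceeded: drops the LRU entry ("a")

        int v;
        assert (! cache.get ("a", v));          // "a" was reaped from the tail
        assert (cache.get ("d", v) && v == 4);  // "d" is present, near the head
}
```

Note that each successful get() also refreshes the entry's position, moving it toward the head of the queue.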
- this(uint capacity) ¶
-
Construct a cache with the specified maximum number of
entries. Additions to the cache beyond this number will
reuse the slot of the least-recently-referenced cache
entry.
- void reaper(K, R)(K k, R r) [static] ¶
-
Reaping callback for the hashmap, acting as a trampoline
- uint size() [@property, final, const] ¶
-
Return the number of entries currently held in the cache
- int opApply(scope int delegate(ref K key, ref V value) dg) [final] ¶
-
Iterate over entries, from the most recently used to the
least recently used
- bool get(K key, ref V value) ¶
-
Get the cache entry identified by the given key, placing it
in the provided value. Returns false if there is no such entry.
- bool add(K key, V value) [final] ¶
-
Place an entry into the cache and associate it with the
provided key. Note that there can be only one entry for
any particular key: if two entries are added with the
same key, the second effectively replaces the first.
Returns true if a new entry was added; false if an
existing one was replaced.
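A brief sketch of the return-value semantics (assuming a (char[], int) instantiation as in the class description):

```d
auto cache = new CacheMap!(char[], int)(10);

assert (cache.add ("key", 1));    // new entry: returns true
assert (! cache.add ("key", 2));  // same key: replaces the value, returns false

int v;
assert (cache.get ("key", v) && v == 2);  // the second value won
```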
- bool take(K key) [final] ¶
-
Remove the cache entry associated with the provided key.
Returns false if there is no such entry.
- bool take(K key, ref V value) [final] ¶
-
Remove (and return) the cache entry associated with the
provided key. Returns false if there is no such entry.
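A sketch distinguishing the two take() overloads (again assuming a (char[], int) instantiation); the second overload both removes the entry and hands back its value:

```d
auto cache = new CacheMap!(char[], int)(10);
cache.add ("key", 42);

int v;
assert (cache.take ("key", v) && v == 42);  // removed, value returned
assert (! cache.take ("key"));              // already gone: returns false
```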